INTRODUCTION

Generative Deep Learning

This coursework explores the use of generative adversarial networks (GANs) to generate synthetic images of flowers. The techniques used in this study follow the approach of Francois Chollet in the textbook Deep Learning with Python.

GANs are a class of deep learning models designed to generate new, previously unseen data that resembles a given training dataset. A GAN consists of two main components: a generator network, which creates new data, and a discriminator network, which distinguishes generated data from real data. The two networks are trained simultaneously in an adversarial manner: the generator tries to produce data that fools the discriminator into classifying it as real, while the discriminator tries to correctly identify generated data as fake. Training continues until the generator produces data that the discriminator can no longer distinguish from real data.
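This two-player objective can be illustrated with a toy NumPy sketch (illustrative only, not the training loop implemented later; it follows this notebook's label convention of 1 for generated images and 0 for real ones):

```python
import numpy as np

def bce(y_true, y_pred, eps=1e-7):
    # Binary cross-entropy: the loss the two networks pull in opposite directions
    y_pred = np.clip(y_pred, eps, 1 - eps)
    return float(np.mean(-(y_true * np.log(y_pred) + (1 - y_true) * np.log(1 - y_pred))))

# Discriminator view: generated images are labelled 1, real images 0.
d_loss_sharp = bce(np.array([1.0, 0.0]), np.array([0.9, 0.1]))   # confident, correct
d_loss_fooled = bce(np.array([1.0, 0.0]), np.array([0.5, 0.5]))  # can no longer tell

# Generator view: it is rewarded when the discriminator scores its
# fakes near 0, i.e. mistakes them for real images.
g_loss_early = bce(np.array([0.0]), np.array([0.9]))  # fakes easily spotted
g_loss_late = bce(np.array([0.0]), np.array([0.2]))   # fakes fooling the discriminator

assert d_loss_sharp < d_loss_fooled  # better discrimination -> lower d_loss
assert g_loss_late < g_loss_early    # more convincing fakes -> lower g_loss
```

As the assertions show, each network improves by driving its own loss down at the other's expense, which is what the alternating updates in the training step below implement.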

The Dataset

The dataset being used is the Oxford 102 Category Flower Dataset, available at https://www.robots.ox.ac.uk/~vgg/data/flowers/102/, containing 102 categories of flowers commonly found in the UK. Each class contains between 40 and 258 samples, for an overall sample size of 8189 images. To be used by the model, the images must be resized and preprocessed into a TensorFlow dataset object, with pixel values normalized to the range [0, 1].
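As a minimal illustration of this preprocessing (using a synthetic array in place of an actual dataset photo, and a crude nearest-neighbour resize where the real pipeline uses proper interpolation):

```python
import numpy as np

# Stand-in for one dataset photo; image sizes vary across the Oxford 102 set
img = np.random.randint(0, 256, (500, 400, 3)).astype(np.uint8)

# Nearest-neighbour resize to the 64x64 input resolution
rows = np.linspace(0, img.shape[0] - 1, 64).astype(int)
cols = np.linspace(0, img.shape[1] - 1, 64).astype(int)
resized = img[rows][:, cols]

# Scale pixel values to [0, 1], as the .map(lambda x: x / 255.0) step below does
arr = resized.astype(np.float32) / 255.0

assert arr.shape == (64, 64, 3)
assert arr.min() >= 0.0 and arr.max() <= 1.0
```

In the notebook itself, `tf.keras.preprocessing.image_dataset_from_directory` handles the resizing in bulk and the `map` call applies the scaling to every batch.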

Workflow

To implement a successful GAN, various hyperparameters will be tweaked, along with regularization techniques, to find the model that produces the most "artistic" or "realistic" looking flower generations. This tuning process includes adjustments to the optimizer, learning rate, layer units, and number of training iterations for each model.

The ultimate goal of this research is to identify the specific changes to the GAN model that result in the highest quality generated flower images, as determined by human interpretation.

In [1]:
import numpy as np
import matplotlib.pyplot as plt
import PIL
import os, sys, pathlib
from PIL import Image
import tensorflow as tf
from tensorflow.keras import layers
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Reshape, UpSampling2D, Conv2D, BatchNormalization, Conv2DTranspose
from tensorflow.keras.layers import LeakyReLU, Dropout, ZeroPadding2D, Flatten, Activation
from tensorflow.keras.optimizers import Adam
In [2]:
#Create global variables
BATCH = 64
IMG_SIZE = (64,64)
LATENT_DIM = 128
basedir = pathlib.Path("gallery") 
imgdir = basedir / "plant"     
outputdir = basedir / "generated"
In [3]:
#Importing data
batch_s = BATCH // 2
#Import the data, resizing each image to 64x64
data = tf.keras.preprocessing.image_dataset_from_directory(imgdir, label_mode=None, image_size=IMG_SIZE, batch_size=batch_s, smart_resize=True).map(lambda x: x / 255.0)
Found 8189 files belonging to 1 classes.
In [4]:
#Display some example images

sample = next(iter(data))

f,ax = plt.subplots(4,4,figsize=(15,15))
ax=ax.flatten()
for i in range(16):
    ax[i].imshow(sample[i])

CREATE GENERATOR / DISCRIMINATOR

In [5]:
def create_generator(latent_dim):
  generator=Sequential()
  generator.add(Dense(4*4*512,input_shape=[latent_dim]))
  generator.add(Reshape([4,4,512]))
  generator.add(Conv2DTranspose(128, kernel_size=4, strides=2, padding="same"))
  generator.add(LeakyReLU(alpha=0.2))
  generator.add(BatchNormalization())
  generator.add(Conv2DTranspose(256, kernel_size=4, strides=2, padding="same"))
  generator.add(LeakyReLU(alpha=0.2))
  generator.add(BatchNormalization())
  generator.add(Conv2DTranspose(512, kernel_size=4, strides=2, padding="same"))
  generator.add(LeakyReLU(alpha=0.2))
  generator.add(BatchNormalization())
  generator.add(Conv2DTranspose(3, kernel_size=4, strides=2, padding="same", activation='sigmoid'))
  return generator

generator = create_generator(LATENT_DIM)
generator.summary()
Model: "sequential"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense (Dense)               (None, 8192)              1056768   
                                                                 
 reshape (Reshape)           (None, 4, 4, 512)         0         
                                                                 
 conv2d_transpose (Conv2DTra  (None, 8, 8, 128)        1048704   
 nspose)                                                         
                                                                 
 leaky_re_lu (LeakyReLU)     (None, 8, 8, 128)         0         
                                                                 
 batch_normalization (BatchN  (None, 8, 8, 128)        512       
 ormalization)                                                   
                                                                 
 conv2d_transpose_1 (Conv2DT  (None, 16, 16, 256)      524544    
 ranspose)                                                       
                                                                 
 leaky_re_lu_1 (LeakyReLU)   (None, 16, 16, 256)       0         
                                                                 
 batch_normalization_1 (Batc  (None, 16, 16, 256)      1024      
 hNormalization)                                                 
                                                                 
 conv2d_transpose_2 (Conv2DT  (None, 32, 32, 512)      2097664   
 ranspose)                                                       
                                                                 
 leaky_re_lu_2 (LeakyReLU)   (None, 32, 32, 512)       0         
                                                                 
 batch_normalization_2 (Batc  (None, 32, 32, 512)      2048      
 hNormalization)                                                 
                                                                 
 conv2d_transpose_3 (Conv2DT  (None, 64, 64, 3)        24579     
 ranspose)                                                       
                                                                 
=================================================================
Total params: 4,755,843
Trainable params: 4,754,051
Non-trainable params: 1,792
_________________________________________________________________
In [6]:
def create_discriminator(input_shape):
  discriminator=Sequential()
  discriminator.add(Conv2D(64, kernel_size=4, strides=2, padding="same",input_shape=input_shape))
  discriminator.add(LeakyReLU(0.2))
  discriminator.add(BatchNormalization())
  discriminator.add(Conv2D(128, kernel_size=4, strides=2, padding="same"))
  discriminator.add(LeakyReLU(0.2))
  discriminator.add(BatchNormalization())
  discriminator.add(Conv2D(256, kernel_size=4, strides=2, padding="same"))
  discriminator.add(LeakyReLU(0.2))
  discriminator.add(Flatten())
  discriminator.add(Dropout(0.2))
  discriminator.add(Dense(1,activation='sigmoid'))
  return discriminator

input_shape = (64, 64, 3)
discriminator = create_discriminator(input_shape)
discriminator.summary()
Model: "sequential_1"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d (Conv2D)             (None, 32, 32, 64)        3136      
                                                                 
 leaky_re_lu_3 (LeakyReLU)   (None, 32, 32, 64)        0         
                                                                 
 batch_normalization_3 (Batc  (None, 32, 32, 64)       256       
 hNormalization)                                                 
                                                                 
 conv2d_1 (Conv2D)           (None, 16, 16, 128)       131200    
                                                                 
 leaky_re_lu_4 (LeakyReLU)   (None, 16, 16, 128)       0         
                                                                 
 batch_normalization_4 (Batc  (None, 16, 16, 128)      512       
 hNormalization)                                                 
                                                                 
 conv2d_2 (Conv2D)           (None, 8, 8, 256)         524544    
                                                                 
 leaky_re_lu_5 (LeakyReLU)   (None, 8, 8, 256)         0         
                                                                 
 flatten (Flatten)           (None, 16384)             0         
                                                                 
 dropout (Dropout)           (None, 16384)             0         
                                                                 
 dense_1 (Dense)             (None, 1)                 16385     
                                                                 
=================================================================
Total params: 676,033
Trainable params: 675,649
Non-trainable params: 384
_________________________________________________________________

GAN CREATION

In [11]:
class GAN(tf.keras.Model):
  def __init__(self, discriminator, generator, latent_dim):
    # Initialize the GAN model by calling the super class's constructor
    super(GAN, self).__init__()
    self.discriminator = discriminator
    self.generator = generator
    self.latent_dim = latent_dim

  def compile(self, d_optimizer, g_optimizer, loss_fn):
    # Compile the GAN model by calling the super class's compile method
    super(GAN, self).compile()
    self.d_optimizer = d_optimizer
    self.g_optimizer = g_optimizer
    self.loss_fn = loss_fn
    # Initialize metrics to track the losses of the discriminator and generator
    self.dloss = tf.keras.metrics.Mean(name="discriminator_loss")
    self.gloss = tf.keras.metrics.Mean(name="generator_loss")

  @property
  def metrics(self):
    # Return the list of metrics
    return [self.dloss, self.gloss]


  def train_step(self, real_images):
    # Get the batch size and generate noise with the same shape
    batch_size = tf.shape(real_images)[0]
    noise = tf.random.normal(shape=(batch_size, self.latent_dim))
    # Generate fake images
    generated_images = self.generator(noise)
    # Concatenate the fake and real images
    combined_images = tf.concat([generated_images, real_images], axis=0)
    # Label generated images 1 ("fake") and real images 0 ("real")
    labels = tf.concat([tf.ones((batch_size, 1)), tf.zeros((batch_size, 1))], axis=0)
    # Add some random noise to the labels to make the training more robust
    labels += 0.05 * tf.random.uniform(tf.shape(labels))
    # Use the discriminator model to predict the probability of the images being real
    with tf.GradientTape() as tape:
      predictions = self.discriminator(combined_images)
      # Calculate the loss for the discriminator
      dloss = self.loss_fn(labels, predictions)
    # Calculate the gradients for the discriminator
    grads = tape.gradient(dloss, self.discriminator.trainable_weights)
    # Update the weights of the discriminator using the optimizer
    self.d_optimizer.apply_gradients(zip(grads, self.discriminator.trainable_weights))
    # Sample a fresh, larger batch of noise for the generator update
    noise = tf.random.normal(shape=(2*batch_size, self.latent_dim))
    # Misleading labels: claim every generated image is real (real = 0 here)
    labels = tf.zeros((2*batch_size, 1))
    # Score the fresh fakes with the discriminator (its weights are not updated here)
    with tf.GradientTape() as tape:
      predictions = self.discriminator(self.generator(noise))
      gloss = self.loss_fn(labels, predictions)
    # Calculate the gradients for the generator
    grads = tape.gradient(gloss, self.generator.trainable_weights)
    # Update the weights of the generator using the optimizer
    self.g_optimizer.apply_gradients(zip(grads, self.generator.trainable_weights))
    # Update the loss metrics with the new losses
    self.dloss.update_state(dloss)
    self.gloss.update_state(gloss)
    # Return a dictionary with the losses of the discriminator and generator
    return {"d_loss": self.dloss.result(), "g_loss": self.gloss.result()}  

TRAIN

In [10]:
def train(model, epoch_number):
  
  #Create callback to monitor progress of model by showing generated images
  class GANMonitor(tf.keras.callbacks.Callback):
    def __init__(self, num_img=3, latent_dim=LATENT_DIM):
        super().__init__()
        self.num_img = num_img
        self.latent_dim = latent_dim

    def on_epoch_end(self, epoch, logs=None):
        random_latent_vectors = tf.random.normal(shape=(self.num_img, self.latent_dim))
        generated_images = self.model.generator(random_latent_vectors)
        generated_images *= 255
        generated_images = generated_images.numpy()
        for i in range(self.num_img):
            img = tf.keras.utils.array_to_img(generated_images[i])
            img.save(outputdir / f"generated_img_{epoch:03d}_{i}.png")

  #Code Partly Provided From Notebook 12.5

  #Fit model (the dataset is already batched, so no batch_size is passed)
  return model.fit(data,
                    epochs = epoch_number,
                    callbacks=[GANMonitor(num_img=10, latent_dim=LATENT_DIM)])

PLOT LOSS

In [6]:
def plot_loss():
    #Uses the global `history` from the most recent training run
    history_dict = history.history
    d_loss = history_dict['d_loss']
    g_loss = history_dict['g_loss']

    epochs = range(1, len(d_loss) + 1)

    blue_dots = 'bo'
    solid_blue_line = 'b'

    print('Lowest Generator Loss: ', np.argmin(g_loss))
    print('Highest Discriminator Loss: ', np.argmax(d_loss))

    plt.plot(epochs, d_loss, blue_dots, label = 'Discriminator Loss')
    plt.plot(epochs, g_loss, solid_blue_line, label = 'Generator Loss')
    plt.title('Discriminator and Generator loss')
    plt.xlabel('Epochs')
    plt.ylabel('Loss')
    plt.legend()
    plt.show()

SHOW IMAGES

In [7]:
def show_imgs(modelz):
    f,ax = plt.subplots(3,5,figsize=(15,10))
    ax = ax.flatten()
    arr = tf.random.normal(shape=(15, LATENT_DIM))
    generated_portraits = modelz.generator(arr)
    for i in range(15):
        g=generated_portraits[i]*255
        ax[i].imshow(tf.cast(g,tf.uint8))

SAVE / LOAD

In [8]:
#Save Paths
generator_path = basedir / "generator_flower_gan.h5"
discriminator_path = basedir / "discriminator_flower_gan.h5"

def save():
    discriminator.save_weights(discriminator_path)          # SAVE
    generator.save_weights(generator_path)

def load():
    discriminator_reloaded = create_discriminator((64, 64, 3)) # CREATE 
    generator_reloaded = create_generator(LATENT_DIM)

    discriminator_reloaded.load_weights(discriminator_path) # LOAD WEIGHTS
    generator_reloaded.load_weights(generator_path)

    gan_reloaded = GAN(                                     # REBUILD GAN
        discriminator=discriminator_reloaded,
        generator=generator_reloaded, 
        latent_dim=LATENT_DIM
    )
    return gan_reloaded

#Code provided in notebook 12.5
In [9]:
def make_model(optimrate):
    discriminator_opt = tf.keras.optimizers.Adam(learning_rate=optimrate, beta_1=0.5)
    generator_opt = tf.keras.optimizers.Adam(learning_rate=optimrate, beta_1=0.5)
    loss_fn = tf.keras.losses.BinaryCrossentropy()
    model = GAN(discriminator=discriminator, generator=generator, latent_dim=LATENT_DIM)
    model.compile(d_optimizer=discriminator_opt, g_optimizer=generator_opt, loss_fn=loss_fn)    
    return model

Experiment 1:

Tweaking the Adam optimizer learning rate: 1.5e-5.

Training for 40 epochs (initial), followed by a further 50 epochs

In [10]:
model = make_model(1.5e-5)
history = train(model,40)
Epoch 1/40
256/256 [==============================] - 40s 125ms/step - d_loss: 0.5583 - g_loss: 0.9802
Epoch 2/40
256/256 [==============================] - 29s 113ms/step - d_loss: 0.3590 - g_loss: 1.5348
Epoch 3/40
256/256 [==============================] - 30s 116ms/step - d_loss: 0.3804 - g_loss: 1.6628
Epoch 4/40
256/256 [==============================] - 29s 111ms/step - d_loss: 0.1945 - g_loss: 2.4467
Epoch 5/40
256/256 [==============================] - 28s 109ms/step - d_loss: 0.1654 - g_loss: 3.1566
Epoch 6/40
256/256 [==============================] - 29s 112ms/step - d_loss: 0.2316 - g_loss: 2.4135
Epoch 7/40
256/256 [==============================] - 30s 116ms/step - d_loss: 0.2805 - g_loss: 2.0149
Epoch 8/40
256/256 [==============================] - 28s 108ms/step - d_loss: 0.3301 - g_loss: 1.7659
Epoch 9/40
256/256 [==============================] - 28s 110ms/step - d_loss: 0.3881 - g_loss: 1.6489
Epoch 10/40
256/256 [==============================] - 28s 110ms/step - d_loss: 0.4590 - g_loss: 1.3642
Epoch 11/40
256/256 [==============================] - 28s 110ms/step - d_loss: 0.5642 - g_loss: 1.2427
Epoch 12/40
256/256 [==============================] - 30s 115ms/step - d_loss: 0.5553 - g_loss: 1.2591
Epoch 13/40
256/256 [==============================] - 29s 111ms/step - d_loss: 0.6360 - g_loss: 1.0523
Epoch 14/40
256/256 [==============================] - 29s 112ms/step - d_loss: 0.5885 - g_loss: 1.0275
Epoch 15/40
256/256 [==============================] - 27s 106ms/step - d_loss: 0.6209 - g_loss: 0.9716
Epoch 16/40
256/256 [==============================] - 28s 108ms/step - d_loss: 0.7185 - g_loss: 0.8541
Epoch 17/40
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6781 - g_loss: 0.9846
Epoch 18/40
256/256 [==============================] - 28s 109ms/step - d_loss: 0.7487 - g_loss: 0.8535
Epoch 19/40
256/256 [==============================] - 32s 123ms/step - d_loss: 0.6636 - g_loss: 0.9208
Epoch 20/40
256/256 [==============================] - 30s 117ms/step - d_loss: 0.6573 - g_loss: 0.9046
Epoch 21/40
256/256 [==============================] - 31s 122ms/step - d_loss: 0.6673 - g_loss: 0.8203
Epoch 22/40
256/256 [==============================] - 29s 112ms/step - d_loss: 0.6571 - g_loss: 0.9728
Epoch 23/40
256/256 [==============================] - 26s 102ms/step - d_loss: 0.6523 - g_loss: 0.9218
Epoch 24/40
256/256 [==============================] - 25s 96ms/step - d_loss: 0.6625 - g_loss: 1.0147
Epoch 25/40
256/256 [==============================] - 25s 97ms/step - d_loss: 0.6520 - g_loss: 0.8721
Epoch 26/40
256/256 [==============================] - 27s 104ms/step - d_loss: 0.5925 - g_loss: 0.9834
Epoch 27/40
256/256 [==============================] - 28s 109ms/step - d_loss: 0.5881 - g_loss: 0.9687
Epoch 28/40
256/256 [==============================] - 28s 108ms/step - d_loss: 0.5908 - g_loss: 0.9775
Epoch 29/40
256/256 [==============================] - 30s 115ms/step - d_loss: 0.6127 - g_loss: 1.0085
Epoch 30/40
256/256 [==============================] - 32s 124ms/step - d_loss: 0.6279 - g_loss: 0.9226
Epoch 31/40
256/256 [==============================] - 30s 116ms/step - d_loss: 0.6518 - g_loss: 0.9712
Epoch 32/40
256/256 [==============================] - 28s 108ms/step - d_loss: 0.6944 - g_loss: 0.8141
Epoch 33/40
256/256 [==============================] - 27s 106ms/step - d_loss: 0.7003 - g_loss: 0.7937
Epoch 34/40
256/256 [==============================] - 27s 106ms/step - d_loss: 0.6823 - g_loss: 0.8640
Epoch 35/40
256/256 [==============================] - 28s 109ms/step - d_loss: 0.6424 - g_loss: 0.8935
Epoch 36/40
256/256 [==============================] - 28s 108ms/step - d_loss: 0.6399 - g_loss: 0.8943
Epoch 37/40
256/256 [==============================] - 28s 107ms/step - d_loss: 0.6404 - g_loss: 0.8319
Epoch 38/40
256/256 [==============================] - 30s 115ms/step - d_loss: 0.6531 - g_loss: 0.8293
Epoch 39/40
256/256 [==============================] - 28s 108ms/step - d_loss: 0.6526 - g_loss: 0.9012
Epoch 40/40
256/256 [==============================] - 29s 113ms/step - d_loss: 0.6621 - g_loss: 0.8642
In [25]:
plot_loss()
Lowest Generator Loss:  37
Highest Discriminator Loss:  17
In [35]:
show_imgs(model)

Retrain for Further Epochs

In [36]:
discriminator_opt = tf.keras.optimizers.Adam(learning_rate=1.5e-5, beta_1=0.5)
generator_opt = tf.keras.optimizers.Adam(learning_rate=1.5e-5, beta_1=0.5)
loss_fn = tf.keras.losses.BinaryCrossentropy()
model = load()
model.compile(d_optimizer=discriminator_opt, g_optimizer=generator_opt, loss_fn=loss_fn)
In [37]:
history = train(model,50)
Epoch 1/50
256/256 [==============================] - 29s 110ms/step - d_loss: 0.6829 - g_loss: 0.8053
Epoch 2/50
256/256 [==============================] - 25s 97ms/step - d_loss: 0.6807 - g_loss: 0.7866
Epoch 3/50
256/256 [==============================] - 25s 97ms/step - d_loss: 0.6755 - g_loss: 0.7899
Epoch 4/50
256/256 [==============================] - 28s 108ms/step - d_loss: 0.6755 - g_loss: 0.7965
Epoch 5/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6829 - g_loss: 0.7960
Epoch 6/50
256/256 [==============================] - 28s 109ms/step - d_loss: 0.6729 - g_loss: 0.7855
Epoch 7/50
256/256 [==============================] - 29s 114ms/step - d_loss: 0.6804 - g_loss: 0.7822
Epoch 8/50
256/256 [==============================] - 30s 117ms/step - d_loss: 0.6790 - g_loss: 0.8056
Epoch 9/50
256/256 [==============================] - 31s 120ms/step - d_loss: 0.6851 - g_loss: 0.7840
Epoch 10/50
256/256 [==============================] - 30s 116ms/step - d_loss: 0.6774 - g_loss: 0.7929
Epoch 11/50
256/256 [==============================] - 30s 116ms/step - d_loss: 0.6849 - g_loss: 0.7835
Epoch 12/50
256/256 [==============================] - 32s 125ms/step - d_loss: 0.6807 - g_loss: 0.7840
Epoch 13/50
256/256 [==============================] - 32s 123ms/step - d_loss: 0.6880 - g_loss: 0.7794
Epoch 14/50
256/256 [==============================] - 30s 118ms/step - d_loss: 0.6860 - g_loss: 0.7820
Epoch 15/50
256/256 [==============================] - 29s 111ms/step - d_loss: 0.6864 - g_loss: 0.7824
Epoch 16/50
256/256 [==============================] - 34s 134ms/step - d_loss: 0.6916 - g_loss: 0.7842
Epoch 17/50
256/256 [==============================] - 32s 125ms/step - d_loss: 0.6893 - g_loss: 0.7743
Epoch 18/50
256/256 [==============================] - 33s 127ms/step - d_loss: 0.6826 - g_loss: 0.7767
Epoch 19/50
256/256 [==============================] - 31s 121ms/step - d_loss: 0.6850 - g_loss: 0.7781
Epoch 20/50
256/256 [==============================] - 31s 121ms/step - d_loss: 0.6893 - g_loss: 0.7698
Epoch 21/50
256/256 [==============================] - 27s 105ms/step - d_loss: 0.6928 - g_loss: 0.7661
Epoch 22/50
256/256 [==============================] - 28s 108ms/step - d_loss: 0.6941 - g_loss: 0.7743
Epoch 23/50
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6891 - g_loss: 0.7775
Epoch 24/50
256/256 [==============================] - 27s 105ms/step - d_loss: 0.6877 - g_loss: 0.7691
Epoch 25/50
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6865 - g_loss: 0.7701
Epoch 26/50
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6948 - g_loss: 0.7628
Epoch 27/50
256/256 [==============================] - 28s 111ms/step - d_loss: 0.6892 - g_loss: 0.7683
Epoch 28/50
256/256 [==============================] - 30s 115ms/step - d_loss: 0.6910 - g_loss: 0.7666
Epoch 29/50
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6962 - g_loss: 0.7706
Epoch 30/50
256/256 [==============================] - 28s 109ms/step - d_loss: 0.6906 - g_loss: 0.7599
Epoch 31/50
256/256 [==============================] - 29s 113ms/step - d_loss: 0.6961 - g_loss: 0.7616
Epoch 32/50
256/256 [==============================] - 26s 101ms/step - d_loss: 0.6917 - g_loss: 0.7603
Epoch 33/50
256/256 [==============================] - 26s 101ms/step - d_loss: 0.6912 - g_loss: 0.7603
Epoch 34/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6957 - g_loss: 0.7650
Epoch 35/50
256/256 [==============================] - 26s 100ms/step - d_loss: 0.6953 - g_loss: 0.7565
Epoch 36/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6951 - g_loss: 0.7635
Epoch 37/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6947 - g_loss: 0.7647
Epoch 38/50
256/256 [==============================] - 25s 99ms/step - d_loss: 0.6934 - g_loss: 0.7591
Epoch 39/50
256/256 [==============================] - 26s 100ms/step - d_loss: 0.6899 - g_loss: 0.7699
Epoch 40/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6873 - g_loss: 0.7605
Epoch 41/50
256/256 [==============================] - 26s 100ms/step - d_loss: 0.6903 - g_loss: 0.7693
Epoch 42/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6873 - g_loss: 0.7714
Epoch 43/50
256/256 [==============================] - 26s 100ms/step - d_loss: 0.6900 - g_loss: 0.7663
Epoch 44/50
256/256 [==============================] - 25s 97ms/step - d_loss: 0.6918 - g_loss: 0.7644
Epoch 45/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6867 - g_loss: 0.7795
Epoch 46/50
256/256 [==============================] - 25s 97ms/step - d_loss: 0.6922 - g_loss: 0.7680
Epoch 47/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6844 - g_loss: 0.7706
Epoch 48/50
256/256 [==============================] - 25s 96ms/step - d_loss: 0.6794 - g_loss: 0.7805
Epoch 49/50
256/256 [==============================] - 26s 101ms/step - d_loss: 0.6874 - g_loss: 0.7659
Epoch 50/50
256/256 [==============================] - 26s 100ms/step - d_loss: 0.6869 - g_loss: 0.7696

Model 1 Output

In [50]:
plot_loss()
show_imgs(model)
Lowest Generator Loss:  32
Highest Discriminator Loss:  34
In [48]:
discriminator.save_weights(discriminator_path)          # SAVE
generator.save_weights(generator_path)

Results of Experiment 1:

Initial training, 40 epochs:

| Epoch | Discriminator Loss | Generator Loss |
| --- | --- | --- |
| 1 | 0.5583 | 0.9802 |
| 2 | 0.3590 | 1.5348 |
| 3 | 0.3804 | 1.6628 |
| 4 | 0.1945 | 2.4467 |
| 5 | 0.1654 | 3.1566 |
| 6 | 0.2316 | 2.4135 |
| 7 | 0.2805 | 2.0149 |
| 8 | 0.3301 | 1.7659 |
| 9 | 0.3881 | 1.6489 |
| 10 | 0.4590 | 1.3642 |
| … | … | … |
| 30 | 0.6279 | 0.9226 |
| 31 | 0.6518 | 0.9712 |
| 32 | 0.6944 | 0.8141 |
| 33 | 0.7003 | 0.7937 |
| 34 | 0.6823 | 0.8640 |
| 35 | 0.6424 | 0.8935 |
| 36 | 0.6399 | 0.8943 |
| 37 | 0.6404 | 0.8319 |
| 38 | 0.6531 | 0.8293 |
| 39 | 0.6526 | 0.9012 |
| 40 | 0.6621 | 0.8642 |

Further training, 50 additional epochs:

| Epoch | Discriminator Loss | Generator Loss |
| --- | --- | --- |
| 1 | 0.6829 | 0.8053 |
| 2 | 0.6807 | 0.7866 |
| 3 | 0.6755 | 0.7899 |
| 4 | 0.6755 | 0.7965 |
| 5 | 0.6829 | 0.7960 |
| 6 | 0.6729 | 0.7855 |
| 7 | 0.6804 | 0.7822 |
| 8 | 0.6790 | 0.8056 |
| 9 | 0.6851 | 0.7840 |
| 10 | 0.6774 | 0.7929 |
| … | … | … |
| 40 | 0.6873 | 0.7605 |
| 41 | 0.6903 | 0.7693 |
| 42 | 0.6873 | 0.7714 |
| 43 | 0.6900 | 0.7663 |
| 44 | 0.6918 | 0.7644 |
| 45 | 0.6867 | 0.7795 |
| 46 | 0.6922 | 0.7680 |
| 47 | 0.6844 | 0.7706 |
| 48 | 0.6794 | 0.7805 |
| 49 | 0.6874 | 0.7659 |
| 50 | 0.6869 | 0.7696 |

The final model had a discriminator loss of 0.6869, an increase of 0.1286 over the first epoch of the initial run. The generator, on the other hand, finished at 0.7696, a change of -0.2106. The discriminator loss remained relatively stable over the later epochs, roughly from the 30th epoch of the initial run through the end of the further 50 (90 epochs in total). The generator loss, however, decreased gradually across that range, indicating that further, albeit marginal, improvement is possible. The final images capture traits and patterns from the flower dataset, with many examples displaying realistic shapes and colors, and a clear increase in detail can be observed between the initial image results and the final images.
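The loss differences quoted above can be checked directly from the logged first- and final-epoch values:

```python
# Values taken from the training logs above
d_first, d_final = 0.5583, 0.6869  # discriminator: epoch 1 (initial run) vs epoch 50 (retrain)
g_first, g_final = 0.9802, 0.7696  # generator: same epochs

# Differences as reported in the discussion
assert round(d_final - d_first, 4) == 0.1286
assert round(g_final - g_first, 4) == -0.2106
```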

Experiment 2:

Tweaking the Adam optimizer learning rate: 2e-5.

Training for 40 epochs (initial), followed by a further 50 epochs

In [13]:
model = make_model(2e-5)
history = train(model,40)
Epoch 1/40
256/256 [==============================] - 37s 116ms/step - d_loss: 0.5402 - g_loss: 1.0413
Epoch 2/40
256/256 [==============================] - 27s 104ms/step - d_loss: 0.3876 - g_loss: 1.4123
Epoch 3/40
256/256 [==============================] - 28s 109ms/step - d_loss: 0.1702 - g_loss: 2.4356
Epoch 4/40
256/256 [==============================] - 27s 104ms/step - d_loss: 0.1549 - g_loss: 3.4074
Epoch 5/40
256/256 [==============================] - 27s 105ms/step - d_loss: 0.1523 - g_loss: 2.8311
Epoch 6/40
256/256 [==============================] - 28s 107ms/step - d_loss: 0.1884 - g_loss: 2.6452
Epoch 7/40
256/256 [==============================] - 29s 112ms/step - d_loss: 0.3844 - g_loss: 2.5068
Epoch 8/40
256/256 [==============================] - 27s 105ms/step - d_loss: 0.4915 - g_loss: 1.5771
Epoch 9/40
256/256 [==============================] - 29s 112ms/step - d_loss: 0.4941 - g_loss: 1.2908
Epoch 10/40
256/256 [==============================] - 27s 104ms/step - d_loss: 0.5189 - g_loss: 1.2442
Epoch 11/40
256/256 [==============================] - 27s 104ms/step - d_loss: 0.6084 - g_loss: 1.0579
Epoch 12/40
256/256 [==============================] - 30s 115ms/step - d_loss: 0.6039 - g_loss: 0.9418
Epoch 13/40
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6128 - g_loss: 0.9995
Epoch 14/40
256/256 [==============================] - 29s 112ms/step - d_loss: 0.6222 - g_loss: 0.9049
Epoch 15/40
256/256 [==============================] - 29s 113ms/step - d_loss: 0.6352 - g_loss: 0.9423
Epoch 16/40
256/256 [==============================] - 28s 109ms/step - d_loss: 0.6620 - g_loss: 0.9499
Epoch 17/40
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6529 - g_loss: 0.8788
Epoch 18/40
256/256 [==============================] - 29s 112ms/step - d_loss: 0.6381 - g_loss: 0.8997
Epoch 19/40
256/256 [==============================] - 29s 112ms/step - d_loss: 0.6592 - g_loss: 0.9259
Epoch 20/40
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6617 - g_loss: 0.9531
Epoch 21/40
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6161 - g_loss: 0.9178
Epoch 22/40
256/256 [==============================] - 27s 106ms/step - d_loss: 0.6494 - g_loss: 0.8919
Epoch 23/40
256/256 [==============================] - 30s 117ms/step - d_loss: 0.6577 - g_loss: 0.8479
Epoch 24/40
256/256 [==============================] - 29s 113ms/step - d_loss: 0.6756 - g_loss: 0.8772
Epoch 25/40
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6551 - g_loss: 0.8695
Epoch 26/40
256/256 [==============================] - 28s 111ms/step - d_loss: 0.6140 - g_loss: 0.9021
Epoch 27/40
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6273 - g_loss: 1.0091
Epoch 28/40
256/256 [==============================] - 28s 108ms/step - d_loss: 0.6344 - g_loss: 0.9003
Epoch 29/40
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6381 - g_loss: 0.9262
Epoch 30/40
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6476 - g_loss: 0.8159
Epoch 31/40
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6474 - g_loss: 0.9005
Epoch 32/40
256/256 [==============================] - 28s 110ms/step - d_loss: 0.6549 - g_loss: 0.8438
Epoch 33/40
256/256 [==============================] - 28s 108ms/step - d_loss: 0.6662 - g_loss: 0.8448
Epoch 34/40
256/256 [==============================] - 25s 99ms/step - d_loss: 0.6718 - g_loss: 0.8160
Epoch 35/40
256/256 [==============================] - 25s 99ms/step - d_loss: 0.6744 - g_loss: 0.8023
Epoch 36/40
256/256 [==============================] - 25s 99ms/step - d_loss: 0.6832 - g_loss: 0.7782
Epoch 37/40
256/256 [==============================] - 26s 102ms/step - d_loss: 0.6778 - g_loss: 0.7963
Epoch 38/40
256/256 [==============================] - 28s 108ms/step - d_loss: 0.6732 - g_loss: 0.8105
Epoch 39/40
256/256 [==============================] - 27s 106ms/step - d_loss: 0.6742 - g_loss: 0.7923
Epoch 40/40
256/256 [==============================] - 29s 114ms/step - d_loss: 0.6885 - g_loss: 0.8073
In [14]:
plot_loss()
show_imgs(model)
Lowest Generator Loss:  35
Highest Discriminator Loss:  35
In [14]:
#Change Save Paths
generator_path = basedir / "generator_flower_gan_model2.h5"
discriminator_path = basedir / "discriminator_flower_gan_model2.h5"
In [ ]:
save()
In [17]:
discriminator_opt = tf.keras.optimizers.Adam(2e-5, 0.5)  # learning_rate=2e-5, beta_1=0.5
generator_opt = tf.keras.optimizers.Adam(2e-5, 0.5)
loss_fn = tf.keras.losses.BinaryCrossentropy()
model = load()
model.compile(d_optimizer=discriminator_opt, g_optimizer=generator_opt, loss_fn=loss_fn)

Model 2 Retrain + Results

In [18]:
history = train(model,50)
plot_loss()
show_imgs(model)
Epoch 1/50
256/256 [==============================] - 39s 111ms/step - d_loss: 0.6829 - g_loss: 0.8087
Epoch 2/50
256/256 [==============================] - 24s 95ms/step - d_loss: 0.6750 - g_loss: 0.8094
Epoch 3/50
256/256 [==============================] - 25s 97ms/step - d_loss: 0.6883 - g_loss: 0.7866
Epoch 4/50
256/256 [==============================] - 25s 99ms/step - d_loss: 0.6728 - g_loss: 0.8005
Epoch 5/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6716 - g_loss: 0.8169
Epoch 6/50
256/256 [==============================] - 25s 99ms/step - d_loss: 0.6601 - g_loss: 0.8255
Epoch 7/50
256/256 [==============================] - 26s 100ms/step - d_loss: 0.6730 - g_loss: 0.8109
Epoch 8/50
256/256 [==============================] - 25s 97ms/step - d_loss: 0.6815 - g_loss: 0.7924
Epoch 9/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6710 - g_loss: 0.7963
Epoch 10/50
256/256 [==============================] - 26s 99ms/step - d_loss: 0.6798 - g_loss: 0.7979
Epoch 11/50
256/256 [==============================] - 26s 101ms/step - d_loss: 0.6747 - g_loss: 0.8018
Epoch 12/50
256/256 [==============================] - 26s 100ms/step - d_loss: 0.6705 - g_loss: 0.7964
Epoch 13/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6753 - g_loss: 0.7993
Epoch 14/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6828 - g_loss: 0.7880
Epoch 15/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6727 - g_loss: 0.8230
Epoch 16/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6701 - g_loss: 0.8124
Epoch 17/50
256/256 [==============================] - 25s 96ms/step - d_loss: 0.6726 - g_loss: 0.7996
Epoch 18/50
256/256 [==============================] - 25s 96ms/step - d_loss: 0.6795 - g_loss: 0.8076
Epoch 19/50
256/256 [==============================] - 25s 96ms/step - d_loss: 0.6690 - g_loss: 0.8056
Epoch 20/50
256/256 [==============================] - 25s 96ms/step - d_loss: 0.6752 - g_loss: 0.8036
Epoch 21/50
256/256 [==============================] - 25s 96ms/step - d_loss: 0.6583 - g_loss: 0.8300
Epoch 22/50
256/256 [==============================] - 25s 96ms/step - d_loss: 0.6696 - g_loss: 0.8057
Epoch 23/50
256/256 [==============================] - 25s 96ms/step - d_loss: 0.6792 - g_loss: 0.8116
Epoch 24/50
256/256 [==============================] - 25s 96ms/step - d_loss: 0.6730 - g_loss: 0.8250
Epoch 25/50
256/256 [==============================] - 25s 99ms/step - d_loss: 0.6709 - g_loss: 0.8191
Epoch 26/50
256/256 [==============================] - 28s 109ms/step - d_loss: 0.6641 - g_loss: 0.8073
Epoch 27/50
256/256 [==============================] - 26s 103ms/step - d_loss: 0.6790 - g_loss: 0.7968
Epoch 28/50
256/256 [==============================] - 26s 100ms/step - d_loss: 0.6828 - g_loss: 0.8005
Epoch 29/50
256/256 [==============================] - 26s 99ms/step - d_loss: 0.6704 - g_loss: 0.8007
Epoch 30/50
256/256 [==============================] - 25s 97ms/step - d_loss: 0.6748 - g_loss: 0.7983
Epoch 31/50
256/256 [==============================] - 25s 96ms/step - d_loss: 0.6741 - g_loss: 0.8076
Epoch 32/50
256/256 [==============================] - 25s 96ms/step - d_loss: 0.6833 - g_loss: 0.7910
Epoch 33/50
256/256 [==============================] - 25s 96ms/step - d_loss: 0.6768 - g_loss: 0.7916
Epoch 34/50
256/256 [==============================] - 26s 101ms/step - d_loss: 0.6858 - g_loss: 0.7820
Epoch 35/50
256/256 [==============================] - 28s 107ms/step - d_loss: 0.6851 - g_loss: 0.7806
Epoch 36/50
256/256 [==============================] - 27s 104ms/step - d_loss: 0.6816 - g_loss: 0.7872
Epoch 37/50
256/256 [==============================] - 25s 97ms/step - d_loss: 0.6869 - g_loss: 0.7791
Epoch 38/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6818 - g_loss: 0.7802
Epoch 39/50
256/256 [==============================] - 25s 97ms/step - d_loss: 0.6852 - g_loss: 0.7845
Epoch 40/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6814 - g_loss: 0.7824
Epoch 41/50
256/256 [==============================] - 25s 97ms/step - d_loss: 0.6860 - g_loss: 0.7826
Epoch 42/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6820 - g_loss: 0.7790
Epoch 43/50
256/256 [==============================] - 25s 97ms/step - d_loss: 0.6841 - g_loss: 0.7813
Epoch 44/50
256/256 [==============================] - 25s 96ms/step - d_loss: 0.6764 - g_loss: 0.7766
Epoch 45/50
256/256 [==============================] - 28s 109ms/step - d_loss: 0.6807 - g_loss: 0.7845
Epoch 46/50
256/256 [==============================] - 27s 105ms/step - d_loss: 0.6758 - g_loss: 0.7878
Epoch 47/50
256/256 [==============================] - 26s 99ms/step - d_loss: 0.6818 - g_loss: 0.7972
Epoch 48/50
256/256 [==============================] - 26s 102ms/step - d_loss: 0.6816 - g_loss: 0.7826
Epoch 49/50
256/256 [==============================] - 26s 99ms/step - d_loss: 0.6738 - g_loss: 0.7846
Epoch 50/50
256/256 [==============================] - 25s 98ms/step - d_loss: 0.6764 - g_loss: 0.7848
Lowest Generator Loss:  39
Highest Discriminator Loss:  35

Result Experiment 2

Results:

| Epoch | Discriminator Loss | Generator Loss |
| --- | --- | --- |
| 1 | 0.5402 | 1.0413 |
| 2 | 0.3876 | 1.4123 |
| 3 | 0.1702 | 2.4356 |
| 4 | 0.1549 | 3.4074 |
| 5 | 0.1523 | 2.8311 |
| 6 | 0.1884 | 2.6452 |
| 7 | 0.3844 | 2.5068 |
| 8 | 0.4915 | 1.5771 |
| 9 | 0.4941 | 1.2908 |
| 10 | 0.5189 | 1.2442 |
| 30 | 0.6476 | 0.8159 |
| 31 | 0.6474 | 0.9005 |
| 32 | 0.6549 | 0.8438 |
| 33 | 0.6662 | 0.8448 |
| 34 | 0.6718 | 0.8160 |
| 35 | 0.6744 | 0.8023 |
| 36 | 0.6832 | 0.7782 |
| 37 | 0.6778 | 0.7963 |
| 38 | 0.6732 | 0.8105 |
| 39 | 0.6742 | 0.7923 |
| 40 | 0.6885 | 0.8073 |

Further Training 50 Total Epoch:

Results:

| Epoch | Discriminator Loss | Generator Loss |
| --- | --- | --- |
| 1 | 0.6829 | 0.8087 |
| 2 | 0.6750 | 0.8094 |
| 3 | 0.6883 | 0.7866 |
| 4 | 0.6728 | 0.8005 |
| 5 | 0.6716 | 0.8169 |
| 6 | 0.6601 | 0.8255 |
| 7 | 0.6730 | 0.8109 |
| 8 | 0.6815 | 0.7924 |
| 9 | 0.6710 | 0.7963 |
| 10 | 0.6798 | 0.7979 |
| 40 | 0.6814 | 0.7824 |
| 41 | 0.6860 | 0.7826 |
| 42 | 0.6820 | 0.7790 |
| 43 | 0.6841 | 0.7813 |
| 44 | 0.6764 | 0.7766 |
| 45 | 0.6807 | 0.7845 |
| 46 | 0.6758 | 0.7878 |
| 47 | 0.6818 | 0.7972 |
| 48 | 0.6816 | 0.7826 |
| 49 | 0.6738 | 0.7846 |
| 50 | 0.6764 | 0.7848 |

Over the combined training runs, the discriminator loss of model 2 rose from 0.5402 in the first epoch to 0.6764 in the final epoch, a difference of +0.1362. The generator loss showed a larger change than in model 1, with a difference of -0.2565. The results at the end of the initial 40-epoch run were already close to those of the final retraining epoch, with only around a 0.02 difference in generator loss. This suggests that the model can achieve similar statistical results without further training, reducing computational expense. However, there is a notable difference in the quality of the images produced by the further-trained model, with clearer images, more vibrancy, and more detail. The next experiment will increase the model's capacity in an attempt to push the generator loss lower (and the discriminator loss higher) at the current learning rate.
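The loss differences quoted above can be reproduced directly from the logged values. A minimal sketch (the numbers are transcribed from the Keras training output above; no helper functions from the notebook are assumed):

```python
# Model 2 d_loss/g_loss at epoch 1 of the initial run and epoch 50 of the retrain,
# taken from the training logs above.
d_first, d_final = 0.5402, 0.6764
g_first, g_final = 1.0413, 0.7848

print(f"discriminator delta: {d_final - d_first:+.4f}")  # +0.1362
print(f"generator delta:     {g_final - g_first:+.4f}")  # -0.2565

# How much the extra 50 epochs moved the generator loss past epoch 40:
g_epoch40 = 0.8073
print(f"retrain change in g_loss: {g_final - g_epoch40:+.4f}")  # -0.0225
```

The last figure is the "only around a 0.02 difference" referred to above: statistically small, even though the visual quality of the samples improved.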

In [19]:
save()

Experiment 3:

Increase model capacity by doubling the units of every Conv layer in the initial model. The aim is that, with the extra capacity, the model will be able to extract more of the basic texture features in the lower layers and give increased definition to the produced images.

In [13]:
#Redefine the generator and discriminator

def create_generator(latent_dim):
  generator=Sequential()
  # Project and reshape the latent vector into a 4x4x512 feature map
  generator.add(Dense(4*4*512,input_shape=[latent_dim]))
  generator.add(Reshape([4,4,512]))
  # Upsample 4x4 -> 8x8 -> 16x16 -> 32x32, with units doubled from the initial model
  generator.add(Conv2DTranspose(256, kernel_size=4, strides=2, padding="same"))
  generator.add(LeakyReLU(alpha=0.2))
  generator.add(BatchNormalization())
  generator.add(Conv2DTranspose(512, kernel_size=4, strides=2, padding="same"))
  generator.add(LeakyReLU(alpha=0.2))
  generator.add(BatchNormalization())
  generator.add(Conv2DTranspose(1024, kernel_size=4, strides=2, padding="same"))
  generator.add(LeakyReLU(alpha=0.2))
  generator.add(BatchNormalization())
  # Final upsample 32x32 -> 64x64 with 3 channels; sigmoid matches the [0, 1] pixel range
  generator.add(Conv2DTranspose(3, kernel_size=4, strides=2, padding="same", activation='sigmoid'))
  return generator

generator = create_generator(LATENT_DIM)
generator.summary()


def create_discriminator(input_shape):
  discriminator=Sequential()
  # Downsample 64x64 -> 32x32 -> 16x16 -> 8x8, with units doubled from the initial model
  discriminator.add(Conv2D(128, kernel_size=4, strides=2, padding="same",input_shape=input_shape))
  discriminator.add(LeakyReLU(0.2))
  discriminator.add(BatchNormalization())
  discriminator.add(Conv2D(256, kernel_size=4, strides=2, padding="same"))
  discriminator.add(LeakyReLU(0.2))
  discriminator.add(BatchNormalization())
  discriminator.add(Conv2D(512, kernel_size=4, strides=2, padding="same"))
  discriminator.add(LeakyReLU(0.2))
  discriminator.add(Flatten())
  discriminator.add(Dropout(0.2))
  # Single sigmoid unit: predicted probability that the input image is real
  discriminator.add(Dense(1,activation='sigmoid'))
  return discriminator

input_shape = (64, 64, 3)
discriminator = create_discriminator(input_shape)
discriminator.summary()
Model: "sequential_2"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense_2 (Dense)             (None, 8192)              1056768   
                                                                 
 reshape_1 (Reshape)         (None, 4, 4, 512)         0         
                                                                 
 conv2d_transpose_4 (Conv2DT  (None, 8, 8, 256)        2097408   
 ranspose)                                                       
                                                                 
 leaky_re_lu_6 (LeakyReLU)   (None, 8, 8, 256)         0         
                                                                 
 batch_normalization_5 (Batc  (None, 8, 8, 256)        1024      
 hNormalization)                                                 
                                                                 
 conv2d_transpose_5 (Conv2DT  (None, 16, 16, 512)      2097664   
 ranspose)                                                       
                                                                 
 leaky_re_lu_7 (LeakyReLU)   (None, 16, 16, 512)       0         
                                                                 
 batch_normalization_6 (Batc  (None, 16, 16, 512)      2048      
 hNormalization)                                                 
                                                                 
 conv2d_transpose_6 (Conv2DT  (None, 32, 32, 1024)     8389632   
 ranspose)                                                       
                                                                 
 leaky_re_lu_8 (LeakyReLU)   (None, 32, 32, 1024)      0         
                                                                 
 batch_normalization_7 (Batc  (None, 32, 32, 1024)     4096      
 hNormalization)                                                 
                                                                 
 conv2d_transpose_7 (Conv2DT  (None, 64, 64, 3)        49155     
 ranspose)                                                       
                                                                 
=================================================================
Total params: 13,697,795
Trainable params: 13,694,211
Non-trainable params: 3,584
_________________________________________________________________
Model: "sequential_3"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_3 (Conv2D)           (None, 32, 32, 128)       6272      
                                                                 
 leaky_re_lu_9 (LeakyReLU)   (None, 32, 32, 128)       0         
                                                                 
 batch_normalization_8 (Batc  (None, 32, 32, 128)      512       
 hNormalization)                                                 
                                                                 
 conv2d_4 (Conv2D)           (None, 16, 16, 256)       524544    
                                                                 
 leaky_re_lu_10 (LeakyReLU)  (None, 16, 16, 256)       0         
                                                                 
 batch_normalization_9 (Batc  (None, 16, 16, 256)      1024      
 hNormalization)                                                 
                                                                 
 conv2d_5 (Conv2D)           (None, 8, 8, 512)         2097664   
                                                                 
 leaky_re_lu_11 (LeakyReLU)  (None, 8, 8, 512)         0         
                                                                 
 flatten_1 (Flatten)         (None, 32768)             0         
                                                                 
 dropout_1 (Dropout)         (None, 32768)             0         
                                                                 
 dense_3 (Dense)             (None, 1)                 32769     
                                                                 
=================================================================
Total params: 2,662,785
Trainable params: 2,662,017
Non-trainable params: 768
_________________________________________________________________
In [15]:
model = make_model(2e-5)
history = train(model,40)
Epoch 1/40
256/256 [==============================] - 117s 380ms/step - d_loss: 0.4755 - g_loss: 1.2382
Epoch 2/40
256/256 [==============================] - 87s 339ms/step - d_loss: 0.3138 - g_loss: 2.0798
Epoch 3/40
256/256 [==============================] - 84s 328ms/step - d_loss: 0.1843 - g_loss: 3.1849
Epoch 4/40
256/256 [==============================] - 80s 310ms/step - d_loss: 0.3833 - g_loss: 1.8323
Epoch 5/40
256/256 [==============================] - 86s 335ms/step - d_loss: 0.6230 - g_loss: 1.2561
Epoch 6/40
256/256 [==============================] - 84s 329ms/step - d_loss: 0.6024 - g_loss: 0.9125
Epoch 7/40
256/256 [==============================] - 82s 319ms/step - d_loss: 0.5794 - g_loss: 1.0844
Epoch 8/40
256/256 [==============================] - 82s 319ms/step - d_loss: 0.6364 - g_loss: 1.0020
Epoch 9/40
256/256 [==============================] - 82s 319ms/step - d_loss: 0.6194 - g_loss: 0.9874
Epoch 10/40
256/256 [==============================] - 82s 319ms/step - d_loss: 0.6355 - g_loss: 0.9680
Epoch 11/40
256/256 [==============================] - 82s 319ms/step - d_loss: 0.5934 - g_loss: 1.0034
Epoch 12/40
256/256 [==============================] - 82s 319ms/step - d_loss: 0.5202 - g_loss: 1.1914
Epoch 13/40
256/256 [==============================] - 82s 319ms/step - d_loss: 0.6451 - g_loss: 1.0040
Epoch 14/40
256/256 [==============================] - 82s 319ms/step - d_loss: 0.6434 - g_loss: 0.9757
Epoch 15/40
256/256 [==============================] - 82s 322ms/step - d_loss: 0.6198 - g_loss: 0.9862
Epoch 16/40
256/256 [==============================] - 83s 325ms/step - d_loss: 0.6188 - g_loss: 0.9842
Epoch 17/40
256/256 [==============================] - 86s 336ms/step - d_loss: 0.6048 - g_loss: 0.9729
Epoch 18/40
256/256 [==============================] - 91s 356ms/step - d_loss: 0.6313 - g_loss: 1.0011
Epoch 19/40
256/256 [==============================] - 89s 347ms/step - d_loss: 0.6355 - g_loss: 0.9388
Epoch 20/40
256/256 [==============================] - 88s 344ms/step - d_loss: 0.6133 - g_loss: 1.0454
Epoch 21/40
256/256 [==============================] - 88s 344ms/step - d_loss: 0.5718 - g_loss: 1.0072
Epoch 22/40
256/256 [==============================] - 89s 349ms/step - d_loss: 0.6315 - g_loss: 1.0056
Epoch 23/40
256/256 [==============================] - 87s 339ms/step - d_loss: 0.6155 - g_loss: 0.9385
Epoch 24/40
256/256 [==============================] - 79s 307ms/step - d_loss: 0.6048 - g_loss: 0.9669
Epoch 25/40
256/256 [==============================] - 81s 317ms/step - d_loss: 0.6352 - g_loss: 0.9189
Epoch 26/40
256/256 [==============================] - 79s 308ms/step - d_loss: 0.5898 - g_loss: 0.9584
Epoch 27/40
256/256 [==============================] - 80s 313ms/step - d_loss: 0.6077 - g_loss: 0.9411
Epoch 28/40
256/256 [==============================] - 83s 322ms/step - d_loss: 0.6409 - g_loss: 0.9283
Epoch 29/40
256/256 [==============================] - 97s 377ms/step - d_loss: 0.6289 - g_loss: 0.8920
Epoch 30/40
256/256 [==============================] - 110s 429ms/step - d_loss: 0.6664 - g_loss: 0.8517
Epoch 31/40
256/256 [==============================] - 102s 397ms/step - d_loss: 0.6521 - g_loss: 0.8606
Epoch 32/40
256/256 [==============================] - 77s 301ms/step - d_loss: 0.6343 - g_loss: 0.8891
Epoch 33/40
256/256 [==============================] - 90s 353ms/step - d_loss: 0.6531 - g_loss: 0.8717
Epoch 34/40
256/256 [==============================] - 84s 329ms/step - d_loss: 0.6512 - g_loss: 0.8869
Epoch 35/40
256/256 [==============================] - 84s 328ms/step - d_loss: 0.6592 - g_loss: 0.8534
Epoch 36/40
256/256 [==============================] - 94s 367ms/step - d_loss: 0.6557 - g_loss: 0.8633
Epoch 37/40
256/256 [==============================] - 95s 371ms/step - d_loss: 0.6461 - g_loss: 0.8818
Epoch 38/40
256/256 [==============================] - 93s 364ms/step - d_loss: 0.6459 - g_loss: 0.8761
Epoch 39/40
256/256 [==============================] - 90s 352ms/step - d_loss: 0.6531 - g_loss: 0.8811
Epoch 40/40
256/256 [==============================] - 90s 353ms/step - d_loss: 0.6451 - g_loss: 0.8707
In [16]:
plot_loss()
show_imgs(model)
Lowest Generator Loss:  29
Highest Discriminator Loss:  29
In [17]:
#Change Save Paths
generator_path = basedir / "generator_flower_gan_model3.h5"
discriminator_path = basedir / "discriminator_flower_gan_model3.h5"
In [18]:
save()
In [24]:
history = train(model,50)
Epoch 1/50
256/256 [==============================] - 99s 386ms/step - d_loss: 0.6408 - g_loss: 0.8936
Epoch 2/50
256/256 [==============================] - 95s 370ms/step - d_loss: 0.6446 - g_loss: 0.9056
Epoch 3/50
256/256 [==============================] - 97s 377ms/step - d_loss: 0.6472 - g_loss: 0.8713
Epoch 4/50
256/256 [==============================] - 76s 296ms/step - d_loss: 0.6455 - g_loss: 0.8673
Epoch 5/50
256/256 [==============================] - 76s 296ms/step - d_loss: 0.6494 - g_loss: 0.8639
Epoch 6/50
256/256 [==============================] - 76s 296ms/step - d_loss: 0.6593 - g_loss: 0.8419
Epoch 7/50
256/256 [==============================] - 76s 295ms/step - d_loss: 0.6599 - g_loss: 0.8535
Epoch 8/50
256/256 [==============================] - 76s 295ms/step - d_loss: 0.6635 - g_loss: 0.8274
Epoch 9/50
256/256 [==============================] - 76s 295ms/step - d_loss: 0.6702 - g_loss: 0.8275
Epoch 10/50
256/256 [==============================] - 76s 295ms/step - d_loss: 0.6626 - g_loss: 0.8245
Epoch 11/50
256/256 [==============================] - 76s 295ms/step - d_loss: 0.6563 - g_loss: 0.8421
Epoch 12/50
256/256 [==============================] - 76s 295ms/step - d_loss: 0.6511 - g_loss: 0.8459
Epoch 13/50
256/256 [==============================] - 76s 295ms/step - d_loss: 0.6559 - g_loss: 0.8418
Epoch 14/50
256/256 [==============================] - 76s 295ms/step - d_loss: 0.6523 - g_loss: 0.8583
Epoch 15/50
256/256 [==============================] - 76s 295ms/step - d_loss: 0.6485 - g_loss: 0.8504
Epoch 16/50
256/256 [==============================] - 75s 294ms/step - d_loss: 0.6460 - g_loss: 0.8543
Epoch 17/50
256/256 [==============================] - 75s 294ms/step - d_loss: 0.6379 - g_loss: 0.8568
Epoch 18/50
256/256 [==============================] - 75s 294ms/step - d_loss: 0.6470 - g_loss: 0.8710
Epoch 19/50
256/256 [==============================] - 75s 294ms/step - d_loss: 0.6403 - g_loss: 0.8671
Epoch 20/50
256/256 [==============================] - 75s 294ms/step - d_loss: 0.6339 - g_loss: 0.8769
Epoch 21/50
256/256 [==============================] - 75s 294ms/step - d_loss: 0.6403 - g_loss: 0.8622
Epoch 22/50
256/256 [==============================] - 85s 333ms/step - d_loss: 0.6313 - g_loss: 0.8721
Epoch 23/50
256/256 [==============================] - 94s 365ms/step - d_loss: 0.6308 - g_loss: 0.8851
Epoch 24/50
256/256 [==============================] - 99s 385ms/step - d_loss: 0.6351 - g_loss: 0.8740
Epoch 25/50
256/256 [==============================] - 99s 385ms/step - d_loss: 0.6282 - g_loss: 0.8865
Epoch 26/50
256/256 [==============================] - 97s 380ms/step - d_loss: 0.6311 - g_loss: 0.8853
Epoch 27/50
256/256 [==============================] - 97s 380ms/step - d_loss: 0.6366 - g_loss: 0.9046
Epoch 28/50
256/256 [==============================] - 96s 376ms/step - d_loss: 0.6255 - g_loss: 0.8926
Epoch 29/50
256/256 [==============================] - 88s 341ms/step - d_loss: 0.6278 - g_loss: 0.9053
Epoch 30/50
256/256 [==============================] - 84s 327ms/step - d_loss: 0.6245 - g_loss: 0.9109
Epoch 31/50
256/256 [==============================] - 85s 331ms/step - d_loss: 0.6237 - g_loss: 0.9263
Epoch 32/50
256/256 [==============================] - 85s 332ms/step - d_loss: 0.6201 - g_loss: 0.9081
Epoch 33/50
256/256 [==============================] - 89s 348ms/step - d_loss: 0.6233 - g_loss: 0.9179
Epoch 34/50
256/256 [==============================] - 87s 338ms/step - d_loss: 0.6277 - g_loss: 0.9198
Epoch 35/50
256/256 [==============================] - 81s 316ms/step - d_loss: 0.6261 - g_loss: 0.9068
Epoch 36/50
256/256 [==============================] - 87s 338ms/step - d_loss: 0.6199 - g_loss: 0.9074
Epoch 37/50
256/256 [==============================] - 89s 347ms/step - d_loss: 0.6200 - g_loss: 0.9090
Epoch 38/50
256/256 [==============================] - 84s 326ms/step - d_loss: 0.6339 - g_loss: 0.9076
Epoch 39/50
256/256 [==============================] - 87s 341ms/step - d_loss: 0.6228 - g_loss: 0.9299
Epoch 40/50
256/256 [==============================] - 83s 323ms/step - d_loss: 0.6219 - g_loss: 0.9137
Epoch 41/50
256/256 [==============================] - 79s 307ms/step - d_loss: 0.6259 - g_loss: 0.9283
Epoch 42/50
256/256 [==============================] - 79s 309ms/step - d_loss: 0.6205 - g_loss: 0.9043
Epoch 43/50
256/256 [==============================] - 79s 307ms/step - d_loss: 0.6265 - g_loss: 0.9050
Epoch 44/50
256/256 [==============================] - 89s 346ms/step - d_loss: 0.6297 - g_loss: 0.8979
Epoch 45/50
256/256 [==============================] - 85s 332ms/step - d_loss: 0.6342 - g_loss: 0.8957
Epoch 46/50
256/256 [==============================] - 87s 338ms/step - d_loss: 0.6362 - g_loss: 0.8932
Epoch 47/50
256/256 [==============================] - 85s 331ms/step - d_loss: 0.6381 - g_loss: 0.9010
Epoch 48/50
256/256 [==============================] - 86s 336ms/step - d_loss: 0.6453 - g_loss: 0.8848
Epoch 49/50
256/256 [==============================] - 85s 332ms/step - d_loss: 0.6432 - g_loss: 0.8829
Epoch 50/50
256/256 [==============================] - 89s 348ms/step - d_loss: 0.6467 - g_loss: 0.8783

Model 3 Outputs:

In [25]:
plot_loss()
show_imgs(model)
Lowest Generator Loss:  9
Highest Discriminator Loss:  8
In [ ]:
save()

Result Experiment 3

The results of this experiment indicate that the generator and discriminator losses did not reach the levels seen in experiment 2, differing by -0.0297 and +0.0935 respectively. Despite this, the final image outputs have much clearer definition than those of experiment 2, likely because the higher-capacity model can discover more representative features. However, the increase in capacity also appears to have pushed the generator towards a generic pattern. The next experiment will therefore again increase the units over model 2, but by a smaller factor than in experiment 3.
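The capacity changes across experiments can be sanity-checked against the printed model summaries. For a Conv2D or Conv2DTranspose layer with a k x k kernel, the parameter count is k * k * in_channels * out_channels plus one bias per output channel. A minimal sketch verifying two entries from the summaries above:

```python
def conv_params(kernel, in_ch, out_ch):
    """Weights plus biases for a Conv2D/Conv2DTranspose layer."""
    return kernel * kernel * in_ch * out_ch + out_ch

# Experiment 3 generator: first Conv2DTranspose, 512 -> 256 channels, 4x4 kernel
assert conv_params(4, 512, 256) == 2_097_408  # matches the sequential_2 summary

# Experiment 4 generator: same position with 1.5x scaling, 512 -> 192 channels
assert conv_params(4, 512, 192) == 1_573_056  # matches the sequential_2 summary
```

This is why the 2x generator reaches roughly 13.7M parameters while the 1.5x variant stays around 8.5M: the convolutional parameter counts grow with the product of the input and output channel widths.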

Experiment 4

Increase the capacity of model 2 by scaling the units of every Conv layer (Conv2D and Conv2DTranspose) by 1.5x.

In [16]:
#Redefine the discriminator and generator with units scaled by 1.5x from the original
def create_generator(latent_dim):
  generator=Sequential()
  generator.add(Dense(4*4*512,input_shape=[latent_dim]))
  generator.add(Reshape([4,4,512]))
  generator.add(Conv2DTranspose(192, kernel_size=4, strides=2, padding="same"))
  generator.add(LeakyReLU(alpha=0.2))
  generator.add(BatchNormalization())
  generator.add(Conv2DTranspose(384, kernel_size=4, strides=2, padding="same"))
  generator.add(LeakyReLU(alpha=0.2))
  generator.add(BatchNormalization())
  generator.add(Conv2DTranspose(758, kernel_size=4, strides=2, padding="same"))  # 758 as run (1.5x of 512 would be 768)
  generator.add(LeakyReLU(alpha=0.2))
  generator.add(BatchNormalization())
  generator.add(Conv2DTranspose(3, kernel_size=4, strides=2, padding="same", activation='sigmoid'))
  return generator

generator = create_generator(LATENT_DIM)
generator.summary()


def create_discriminator(input_shape):
  discriminator=Sequential()
  discriminator.add(Conv2D(96, kernel_size=4, strides=2, padding="same",input_shape=input_shape))
  discriminator.add(LeakyReLU(0.2))
  discriminator.add(BatchNormalization())
  discriminator.add(Conv2D(192, kernel_size=4, strides=2, padding="same"))
  discriminator.add(LeakyReLU(0.2))
  discriminator.add(BatchNormalization())
  discriminator.add(Conv2D(384, kernel_size=4, strides=2, padding="same"))
  discriminator.add(LeakyReLU(0.2))
  discriminator.add(Flatten())
  discriminator.add(Dropout(0.2))
  discriminator.add(Dense(1,activation='sigmoid'))
  return discriminator

input_shape = (64, 64, 3)
discriminator = create_discriminator(input_shape)
discriminator.summary()
Model: "sequential_2"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense_2 (Dense)             (None, 8192)              1056768   
                                                                 
 reshape_1 (Reshape)         (None, 4, 4, 512)         0         
                                                                 
 conv2d_transpose_4 (Conv2DT  (None, 8, 8, 192)        1573056   
 ranspose)                                                       
                                                                 
 leaky_re_lu_6 (LeakyReLU)   (None, 8, 8, 192)         0         
                                                                 
 batch_normalization_5 (Batc  (None, 8, 8, 192)        768       
 hNormalization)                                                 
                                                                 
 conv2d_transpose_5 (Conv2DT  (None, 16, 16, 384)      1180032   
 ranspose)                                                       
                                                                 
 leaky_re_lu_7 (LeakyReLU)   (None, 16, 16, 384)       0         
                                                                 
 batch_normalization_6 (Batc  (None, 16, 16, 384)      1536      
 hNormalization)                                                 
                                                                 
 conv2d_transpose_6 (Conv2DT  (None, 32, 32, 758)      4657910   
 ranspose)                                                       
                                                                 
 leaky_re_lu_8 (LeakyReLU)   (None, 32, 32, 758)       0         
                                                                 
 batch_normalization_7 (Batc  (None, 32, 32, 758)      3032      
 hNormalization)                                                 
                                                                 
 conv2d_transpose_7 (Conv2DT  (None, 64, 64, 3)        36387     
 ranspose)                                                       
                                                                 
=================================================================
Total params: 8,509,489
Trainable params: 8,506,821
Non-trainable params: 2,668
_________________________________________________________________
Model: "sequential_3"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_3 (Conv2D)           (None, 32, 32, 96)        4704      
                                                                 
 leaky_re_lu_9 (LeakyReLU)   (None, 32, 32, 96)        0         
                                                                 
 batch_normalization_8 (Batc  (None, 32, 32, 96)       384       
 hNormalization)                                                 
                                                                 
 conv2d_4 (Conv2D)           (None, 16, 16, 192)       295104    
                                                                 
 leaky_re_lu_10 (LeakyReLU)  (None, 16, 16, 192)       0         
                                                                 
 batch_normalization_9 (Batc  (None, 16, 16, 192)      768       
 hNormalization)                                                 
                                                                 
 conv2d_5 (Conv2D)           (None, 8, 8, 384)         1180032   
                                                                 
 leaky_re_lu_11 (LeakyReLU)  (None, 8, 8, 384)         0         
                                                                 
 flatten_1 (Flatten)         (None, 24576)             0         
                                                                 
 dropout_1 (Dropout)         (None, 24576)             0         
                                                                 
 dense_3 (Dense)             (None, 1)                 24577     
                                                                 
=================================================================
Total params: 1,505,569
Trainable params: 1,504,993
Non-trainable params: 576
_________________________________________________________________
In [12]:
model = make_model(2e-5)
history = train(model,40)
Epoch 1/40
256/256 [==============================] - 78s 254ms/step - d_loss: 0.4945 - g_loss: 1.1522
Epoch 2/40
256/256 [==============================] - 59s 230ms/step - d_loss: 0.3261 - g_loss: 1.9410
Epoch 3/40
256/256 [==============================] - 61s 238ms/step - d_loss: 0.2154 - g_loss: 2.7124
Epoch 4/40
256/256 [==============================] - 55s 216ms/step - d_loss: 0.3163 - g_loss: 2.4316
Epoch 5/40
256/256 [==============================] - 55s 215ms/step - d_loss: 0.3238 - g_loss: 1.9457
Epoch 6/40
256/256 [==============================] - 55s 216ms/step - d_loss: 0.5730 - g_loss: 1.1287
Epoch 7/40
256/256 [==============================] - 55s 216ms/step - d_loss: 0.6336 - g_loss: 1.0838
Epoch 8/40
256/256 [==============================] - 55s 216ms/step - d_loss: 0.5735 - g_loss: 1.0192
Epoch 9/40
256/256 [==============================] - 55s 216ms/step - d_loss: 0.5844 - g_loss: 1.0626
Epoch 10/40
256/256 [==============================] - 55s 216ms/step - d_loss: 0.5792 - g_loss: 1.1123
Epoch 11/40
256/256 [==============================] - 55s 215ms/step - d_loss: 0.5921 - g_loss: 1.0088
Epoch 12/40
256/256 [==============================] - 55s 216ms/step - d_loss: 0.6599 - g_loss: 0.9875
Epoch 13/40
256/256 [==============================] - 55s 216ms/step - d_loss: 0.5861 - g_loss: 0.9693
Epoch 14/40
256/256 [==============================] - 55s 216ms/step - d_loss: 0.5992 - g_loss: 1.1660
Epoch 15/40
256/256 [==============================] - 55s 216ms/step - d_loss: 0.5514 - g_loss: 1.0080
Epoch 16/40
256/256 [==============================] - 55s 216ms/step - d_loss: 0.6198 - g_loss: 1.0304
Epoch 17/40
256/256 [==============================] - 55s 216ms/step - d_loss: 0.5625 - g_loss: 1.0230
Epoch 18/40
256/256 [==============================] - 55s 216ms/step - d_loss: 0.6116 - g_loss: 1.0267
Epoch 19/40
256/256 [==============================] - 55s 215ms/step - d_loss: 0.6164 - g_loss: 0.9444
Epoch 20/40
256/256 [==============================] - 55s 213ms/step - d_loss: 0.6256 - g_loss: 0.9104
Epoch 21/40
256/256 [==============================] - 53s 207ms/step - d_loss: 0.6673 - g_loss: 0.8683
Epoch 22/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6937 - g_loss: 0.8519
Epoch 23/40
256/256 [==============================] - 53s 207ms/step - d_loss: 0.6571 - g_loss: 0.9162
Epoch 24/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6372 - g_loss: 0.9854
Epoch 25/40
256/256 [==============================] - 53s 207ms/step - d_loss: 0.6304 - g_loss: 0.8750
Epoch 26/40
256/256 [==============================] - 53s 207ms/step - d_loss: 0.6305 - g_loss: 0.8924
Epoch 27/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6417 - g_loss: 0.9077
Epoch 28/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6887 - g_loss: 0.8047
Epoch 29/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6664 - g_loss: 0.8171
Epoch 30/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6647 - g_loss: 0.8236
Epoch 31/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6664 - g_loss: 0.8293
Epoch 32/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6401 - g_loss: 0.8755
Epoch 33/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6502 - g_loss: 0.8920
Epoch 34/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6613 - g_loss: 0.8354
Epoch 35/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6489 - g_loss: 0.8735
Epoch 36/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6484 - g_loss: 0.8476
Epoch 37/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6526 - g_loss: 0.8506
Epoch 38/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6664 - g_loss: 0.8286
Epoch 39/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6594 - g_loss: 0.8188
Epoch 40/40
256/256 [==============================] - 53s 206ms/step - d_loss: 0.6594 - g_loss: 0.8261
In [13]:
plot_loss()
show_imgs(model)
Lowest Generator Loss:  27
Highest Discriminator Loss:  21
In [14]:
#Change Save Paths
generator_path = basedir / "generator_flower_gan_model4.h5"
discriminator_path = basedir / "discriminator_flower_gan_model4.h5"
save()
In [19]:
#Further 50 Epochs
history = train(model,50)
Epoch 1/50
256/256 [==============================] - 56s 216ms/step - d_loss: 0.6590 - g_loss: 0.8395
Epoch 2/50
256/256 [==============================] - 51s 200ms/step - d_loss: 0.6675 - g_loss: 0.8300
Epoch 3/50
256/256 [==============================] - 51s 199ms/step - d_loss: 0.6611 - g_loss: 0.8349
Epoch 4/50
256/256 [==============================] - 51s 200ms/step - d_loss: 0.6583 - g_loss: 0.8284
Epoch 5/50
256/256 [==============================] - 54s 212ms/step - d_loss: 0.6526 - g_loss: 0.8356
Epoch 6/50
256/256 [==============================] - 52s 203ms/step - d_loss: 0.6571 - g_loss: 0.8278
Epoch 7/50
256/256 [==============================] - 59s 230ms/step - d_loss: 0.6565 - g_loss: 0.8288
Epoch 8/50
256/256 [==============================] - 58s 228ms/step - d_loss: 0.6575 - g_loss: 0.8374
Epoch 9/50
256/256 [==============================] - 58s 225ms/step - d_loss: 0.6569 - g_loss: 0.8463
Epoch 10/50
256/256 [==============================] - 57s 223ms/step - d_loss: 0.6620 - g_loss: 0.8265
Epoch 11/50
256/256 [==============================] - 61s 239ms/step - d_loss: 0.6563 - g_loss: 0.8301
Epoch 12/50
256/256 [==============================] - 58s 225ms/step - d_loss: 0.6626 - g_loss: 0.8198
Epoch 13/50
256/256 [==============================] - 58s 224ms/step - d_loss: 0.6642 - g_loss: 0.8213
Epoch 14/50
256/256 [==============================] - 58s 224ms/step - d_loss: 0.6574 - g_loss: 0.8285
Epoch 15/50
256/256 [==============================] - 60s 232ms/step - d_loss: 0.6547 - g_loss: 0.8251
Epoch 16/50
256/256 [==============================] - 63s 244ms/step - d_loss: 0.6555 - g_loss: 0.8417
Epoch 17/50
256/256 [==============================] - 60s 233ms/step - d_loss: 0.6508 - g_loss: 0.8387
Epoch 18/50
256/256 [==============================] - 60s 232ms/step - d_loss: 0.6507 - g_loss: 0.8417
Epoch 19/50
256/256 [==============================] - 61s 238ms/step - d_loss: 0.6487 - g_loss: 0.8509
Epoch 20/50
256/256 [==============================] - 61s 238ms/step - d_loss: 0.6588 - g_loss: 0.8510
Epoch 21/50
256/256 [==============================] - 56s 219ms/step - d_loss: 0.6510 - g_loss: 0.8669
Epoch 22/50
256/256 [==============================] - 57s 224ms/step - d_loss: 0.6434 - g_loss: 0.8531
Epoch 23/50
256/256 [==============================] - 62s 241ms/step - d_loss: 0.6494 - g_loss: 0.8401
Epoch 24/50
256/256 [==============================] - 65s 253ms/step - d_loss: 0.6445 - g_loss: 0.8422
Epoch 25/50
256/256 [==============================] - 63s 245ms/step - d_loss: 0.6479 - g_loss: 0.8512
Epoch 26/50
256/256 [==============================] - 60s 233ms/step - d_loss: 0.6438 - g_loss: 0.8429
Epoch 27/50
256/256 [==============================] - 55s 213ms/step - d_loss: 0.6471 - g_loss: 0.8527
Epoch 28/50
256/256 [==============================] - 54s 210ms/step - d_loss: 0.6389 - g_loss: 0.8661
Epoch 29/50
256/256 [==============================] - 54s 209ms/step - d_loss: 0.6468 - g_loss: 0.8725
Epoch 30/50
256/256 [==============================] - 58s 228ms/step - d_loss: 0.6362 - g_loss: 0.8480
Epoch 31/50
256/256 [==============================] - 61s 239ms/step - d_loss: 0.6407 - g_loss: 0.8605
Epoch 32/50
256/256 [==============================] - 59s 229ms/step - d_loss: 0.6413 - g_loss: 0.8687
Epoch 33/50
256/256 [==============================] - 59s 229ms/step - d_loss: 0.6422 - g_loss: 0.8702
Epoch 34/50
256/256 [==============================] - 60s 236ms/step - d_loss: 0.6463 - g_loss: 0.8595
Epoch 35/50
256/256 [==============================] - 62s 243ms/step - d_loss: 0.6479 - g_loss: 0.8458
Epoch 36/50
256/256 [==============================] - 61s 236ms/step - d_loss: 0.6449 - g_loss: 0.8770
Epoch 37/50
256/256 [==============================] - 61s 238ms/step - d_loss: 0.6472 - g_loss: 0.8644
Epoch 38/50
256/256 [==============================] - 59s 231ms/step - d_loss: 0.6516 - g_loss: 0.8513
Epoch 39/50
256/256 [==============================] - 61s 237ms/step - d_loss: 0.6526 - g_loss: 0.8466
Epoch 40/50
256/256 [==============================] - 61s 239ms/step - d_loss: 0.6591 - g_loss: 0.8581
Epoch 41/50
256/256 [==============================] - 60s 233ms/step - d_loss: 0.6590 - g_loss: 0.8670
Epoch 42/50
256/256 [==============================] - 59s 230ms/step - d_loss: 0.6609 - g_loss: 0.8364
Epoch 43/50
256/256 [==============================] - 60s 235ms/step - d_loss: 0.6585 - g_loss: 0.8506
Epoch 44/50
256/256 [==============================] - 59s 231ms/step - d_loss: 0.6615 - g_loss: 0.8405
Epoch 45/50
256/256 [==============================] - 57s 220ms/step - d_loss: 0.6553 - g_loss: 0.8518
Epoch 46/50
256/256 [==============================] - 55s 213ms/step - d_loss: 0.6632 - g_loss: 0.8619
Epoch 47/50
256/256 [==============================] - 53s 208ms/step - d_loss: 0.6622 - g_loss: 0.8209
Epoch 48/50
256/256 [==============================] - 54s 209ms/step - d_loss: 0.6679 - g_loss: 0.8460
Epoch 49/50
256/256 [==============================] - 53s 208ms/step - d_loss: 0.6642 - g_loss: 0.8458
Epoch 50/50
256/256 [==============================] - 54s 211ms/step - d_loss: 0.6669 - g_loss: 0.8196
In [20]:
plot_loss()
show_imgs(model)
save()
Lowest Generator Loss:  9
Highest Discriminator Loss:  47

Results Experiment 4

Parameter Change:

| Algorithm | Experiment 3 Trainable Parameters | Experiment 4 Trainable Parameters | Difference |
| --- | --- | --- | --- |
| Generator | 13,697,795 | 8,509,489 | 5,188,306 |
| Discriminator | 2,662,785 | 1,505,569 | 1,157,216 |
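As a quick sanity check, the differences quoted in the table can be reproduced with two subtractions (parameter counts taken from the model summaries above):

```python
# Parameter counts from the experiment 3 and 4 model summaries.
gen_exp3, gen_exp4 = 13_697_795, 8_509_489
disc_exp3, disc_exp4 = 2_662_785, 1_505_569

print(gen_exp3 - gen_exp4)    # generator difference: 5188306
print(disc_exp3 - disc_exp4)  # discriminator difference: 1157216
```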

Experiment 3 final epoch: d_loss: 0.6467 - g_loss: 0.8783

Experiment 4 final epoch: d_loss: 0.6669 - g_loss: 0.8196

The final losses indicate an improvement in generator performance, as evidenced by the change in loss of +0.0202 (discriminator) / -0.0587 (generator) compared to experiment 3. Despite this improvement, the model still lacks definition and produces images with a higher level of vibrancy than the previous model. The experiment as a whole suggests that increasing the capacity of the model is necessary for achieving better image output, which is the primary focus of this investigation: although the reduced-capacity model reached lower final loss values (0.6669 / 0.8196), this did not translate into visibly better generations.
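The quoted loss deltas can be reproduced from the final-epoch figures of the two runs:

```python
# Final-epoch losses taken from the experiment 3 and 4 training logs.
d3, g3 = 0.6467, 0.8783  # experiment 3
d4, g4 = 0.6669, 0.8196  # experiment 4

print(round(d4 - d3, 4))  # 0.0202  (discriminator loss change)
print(round(g4 - g3, 4))  # -0.0587 (generator loss change)
```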

Experiment 5

Alter model 3 to add L2 regularization to every kernel, reload the saved weights, and continue training.
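For reference, `regularizers.l2(0.001)` adds a penalty of 0.001 · Σw² per regularized kernel to the training loss, nudging weights toward zero. A minimal NumPy sketch of that penalty (the 3×4 weight matrix here is hypothetical, standing in for a layer kernel):

```python
import numpy as np

def l2_penalty(weights, lam=0.001):
    # The penalty regularizers.l2(lam) contributes for one kernel: lam * sum(w^2)
    return lam * np.sum(np.square(weights))

# Kernel of twelve ones -> penalty = 0.001 * 12 ~= 0.012
w = np.ones((3, 4))
print(l2_penalty(w))  # ~0.012
```

During training, Keras sums these per-layer penalties into the total loss, so every regularized kernel in the generator and discriminator below is pulled toward smaller weights.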

In [18]:
#Add L2 Regularization
from keras import regularizers

def create_generator(latent_dim):
    #Same architecture as model 3, with an L2 penalty (lambda=0.001) on each kernel
    generator=Sequential()
    generator.add(Dense(4*4*512, input_shape=[latent_dim], kernel_regularizer=regularizers.l2(0.001)))
    generator.add(Reshape([4,4,512]))
    #Upsample 4x4 -> 8x8 -> 16x16 -> 32x32 -> 64x64 via strided transposed convolutions
    generator.add(Conv2DTranspose(256, kernel_size=4, strides=2, padding="same", kernel_regularizer=regularizers.l2(0.001)))
    generator.add(LeakyReLU(alpha=0.2))
    generator.add(BatchNormalization())
    generator.add(Conv2DTranspose(512, kernel_size=4, strides=2, padding="same", kernel_regularizer=regularizers.l2(0.001)))
    generator.add(LeakyReLU(alpha=0.2))
    generator.add(BatchNormalization())
    generator.add(Conv2DTranspose(1024, kernel_size=4, strides=2, padding="same", kernel_regularizer=regularizers.l2(0.001)))
    generator.add(LeakyReLU(alpha=0.2))
    generator.add(BatchNormalization())
    #Sigmoid output keeps pixel values in [0, 1], matching the normalized dataset
    generator.add(Conv2DTranspose(3, kernel_size=4, strides=2, padding="same", activation='sigmoid', kernel_regularizer=regularizers.l2(0.001)))
    return generator

generator = create_generator(LATENT_DIM)
generator.summary()



def create_discriminator(input_shape):
    #Mirror of the generator: downsample 64x64 -> 32x32 -> 16x16 -> 8x8 with strided convolutions
    discriminator=Sequential()
    discriminator.add(Conv2D(128, kernel_size=4, strides=2, padding="same", input_shape=input_shape, kernel_regularizer=regularizers.l2(0.001)))
    discriminator.add(LeakyReLU(alpha=0.2))
    discriminator.add(BatchNormalization())
    discriminator.add(Conv2D(256, kernel_size=4, strides=2, padding="same", kernel_regularizer=regularizers.l2(0.001)))
    discriminator.add(LeakyReLU(alpha=0.2))
    discriminator.add(BatchNormalization())
    discriminator.add(Conv2D(512, kernel_size=4, strides=2, padding="same", kernel_regularizer=regularizers.l2(0.001)))
    discriminator.add(LeakyReLU(alpha=0.2))
    discriminator.add(Flatten())
    discriminator.add(Dropout(0.2))
    #Single sigmoid unit outputs the probability that the input image is real
    discriminator.add(Dense(1, activation='sigmoid', kernel_regularizer=regularizers.l2(0.001)))
    return discriminator

input_shape = (64, 64, 3)
discriminator = create_discriminator(input_shape)
discriminator.summary()
Model: "sequential_10"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 dense_10 (Dense)            (None, 8192)              1056768   
                                                                 
 reshape_5 (Reshape)         (None, 4, 4, 512)         0         
                                                                 
 conv2d_transpose_20 (Conv2D  (None, 8, 8, 256)        2097408   
 Transpose)                                                      
                                                                 
 leaky_re_lu_30 (LeakyReLU)  (None, 8, 8, 256)         0         
                                                                 
 batch_normalization_25 (Bat  (None, 8, 8, 256)        1024      
 chNormalization)                                                
                                                                 
 conv2d_transpose_21 (Conv2D  (None, 16, 16, 512)      2097664   
 Transpose)                                                      
                                                                 
 leaky_re_lu_31 (LeakyReLU)  (None, 16, 16, 512)       0         
                                                                 
 batch_normalization_26 (Bat  (None, 16, 16, 512)      2048      
 chNormalization)                                                
                                                                 
 conv2d_transpose_22 (Conv2D  (None, 32, 32, 1024)     8389632   
 Transpose)                                                      
                                                                 
 leaky_re_lu_32 (LeakyReLU)  (None, 32, 32, 1024)      0         
                                                                 
 batch_normalization_27 (Bat  (None, 32, 32, 1024)     4096      
 chNormalization)                                                
                                                                 
 conv2d_transpose_23 (Conv2D  (None, 64, 64, 3)        49155     
 Transpose)                                                      
                                                                 
=================================================================
Total params: 13,697,795
Trainable params: 13,694,211
Non-trainable params: 3,584
_________________________________________________________________
Model: "sequential_11"
_________________________________________________________________
 Layer (type)                Output Shape              Param #   
=================================================================
 conv2d_15 (Conv2D)          (None, 32, 32, 128)       6272      
                                                                 
 leaky_re_lu_33 (LeakyReLU)  (None, 32, 32, 128)       0         
                                                                 
 batch_normalization_28 (Bat  (None, 32, 32, 128)      512       
 chNormalization)                                                
                                                                 
 conv2d_16 (Conv2D)          (None, 16, 16, 256)       524544    
                                                                 
 leaky_re_lu_34 (LeakyReLU)  (None, 16, 16, 256)       0         
                                                                 
 batch_normalization_29 (Bat  (None, 16, 16, 256)      1024      
 chNormalization)                                                
                                                                 
 conv2d_17 (Conv2D)          (None, 8, 8, 512)         2097664   
                                                                 
 leaky_re_lu_35 (LeakyReLU)  (None, 8, 8, 512)         0         
                                                                 
 flatten_5 (Flatten)         (None, 32768)             0         
                                                                 
 dropout_5 (Dropout)         (None, 32768)             0         
                                                                 
 dense_11 (Dense)            (None, 1)                 32769     
                                                                 
=================================================================
Total params: 2,662,785
Trainable params: 2,662,017
Non-trainable params: 768
_________________________________________________________________
In [19]:
#Change Load Paths To Model 3
generator_path = basedir / "generator_flower_gan_model3.h5"
discriminator_path = basedir / "discriminator_flower_gan_model3.h5"
load()
Out[19]:
<__main__.GAN at 0x1f1ab091760>
In [20]:
discriminator_opt = tf.keras.optimizers.Adam(2e-5,0.5)
generator_opt = tf.keras.optimizers.Adam(2e-5,0.5)
loss_fn = tf.keras.losses.BinaryCrossentropy()
model = load()
model.compile(d_optimizer=discriminator_opt, g_optimizer=generator_opt, loss_fn=loss_fn)
history = train(model,200)
Epoch 1/200
256/256 [==============================] - 89s 344ms/step - d_loss: 0.6409 - g_loss: 0.8801
Epoch 2/200
256/256 [==============================] - 78s 303ms/step - d_loss: 0.6535 - g_loss: 0.8830
Epoch 3/200
256/256 [==============================] - 76s 296ms/step - d_loss: 0.6570 - g_loss: 0.8513
Epoch 4/200
256/256 [==============================] - 81s 314ms/step - d_loss: 0.6536 - g_loss: 0.8660
Epoch 5/200
256/256 [==============================] - 82s 321ms/step - d_loss: 0.6546 - g_loss: 0.8635
Epoch 6/200
256/256 [==============================] - 82s 320ms/step - d_loss: 0.6534 - g_loss: 0.8463
Epoch 7/200
256/256 [==============================] - 82s 321ms/step - d_loss: 0.6646 - g_loss: 0.8463
Epoch 8/200
256/256 [==============================] - 82s 321ms/step - d_loss: 0.6624 - g_loss: 0.8307
Epoch 9/200
256/256 [==============================] - 82s 321ms/step - d_loss: 0.6665 - g_loss: 0.8316
Epoch 10/200
256/256 [==============================] - 82s 321ms/step - d_loss: 0.6626 - g_loss: 0.8281
Epoch 11/200
256/256 [==============================] - 82s 321ms/step - d_loss: 0.6548 - g_loss: 0.8449
Epoch 12/200
256/256 [==============================] - 82s 321ms/step - d_loss: 0.6608 - g_loss: 0.8308
Epoch 13/200
256/256 [==============================] - 84s 329ms/step - d_loss: 0.6558 - g_loss: 0.8593
Epoch 14/200
256/256 [==============================] - 84s 326ms/step - d_loss: 0.6542 - g_loss: 0.8429
Epoch 15/200
256/256 [==============================] - 84s 326ms/step - d_loss: 0.6461 - g_loss: 0.8513
Epoch 16/200
256/256 [==============================] - 84s 326ms/step - d_loss: 0.6432 - g_loss: 0.8734
Epoch 17/200
256/256 [==============================] - 84s 326ms/step - d_loss: 0.6440 - g_loss: 0.8542
Epoch 18/200
256/256 [==============================] - 83s 326ms/step - d_loss: 0.6433 - g_loss: 0.8647
Epoch 19/200
256/256 [==============================] - 83s 323ms/step - d_loss: 0.6462 - g_loss: 0.8528
Epoch 20/200
256/256 [==============================] - 84s 326ms/step - d_loss: 0.6402 - g_loss: 0.8618
Epoch 21/200
256/256 [==============================] - 83s 324ms/step - d_loss: 0.6390 - g_loss: 0.8827
Epoch 22/200
256/256 [==============================] - 83s 324ms/step - d_loss: 0.6363 - g_loss: 0.8809
Epoch 23/200
256/256 [==============================] - 83s 324ms/step - d_loss: 0.6318 - g_loss: 0.8791
Epoch 24/200
256/256 [==============================] - 81s 315ms/step - d_loss: 0.6388 - g_loss: 0.8761
Epoch 25/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6301 - g_loss: 0.8809
Epoch 26/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6368 - g_loss: 0.8838
Epoch 27/200
256/256 [==============================] - 87s 340ms/step - d_loss: 0.6311 - g_loss: 0.8877
Epoch 28/200
256/256 [==============================] - 87s 338ms/step - d_loss: 0.6273 - g_loss: 0.9080
Epoch 29/200
256/256 [==============================] - 83s 325ms/step - d_loss: 0.6315 - g_loss: 0.8938
Epoch 30/200
256/256 [==============================] - 83s 325ms/step - d_loss: 0.6227 - g_loss: 0.9121
Epoch 31/200
256/256 [==============================] - 84s 326ms/step - d_loss: 0.6234 - g_loss: 0.8932
Epoch 32/200
256/256 [==============================] - 84s 327ms/step - d_loss: 0.6328 - g_loss: 0.8867
Epoch 33/200
256/256 [==============================] - 83s 325ms/step - d_loss: 0.6308 - g_loss: 0.8941
Epoch 34/200
256/256 [==============================] - 84s 327ms/step - d_loss: 0.6277 - g_loss: 0.9001
Epoch 35/200
256/256 [==============================] - 84s 328ms/step - d_loss: 0.6210 - g_loss: 0.9019
Epoch 36/200
256/256 [==============================] - 83s 326ms/step - d_loss: 0.6233 - g_loss: 0.9122
Epoch 37/200
256/256 [==============================] - 84s 327ms/step - d_loss: 0.6256 - g_loss: 0.9330
Epoch 38/200
256/256 [==============================] - 84s 327ms/step - d_loss: 0.6203 - g_loss: 0.9112
Epoch 39/200
256/256 [==============================] - 81s 314ms/step - d_loss: 0.6322 - g_loss: 0.9033
Epoch 40/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6339 - g_loss: 0.9001
Epoch 41/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6304 - g_loss: 0.8886
Epoch 42/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6447 - g_loss: 0.9035
Epoch 43/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6381 - g_loss: 0.8799
Epoch 44/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6401 - g_loss: 0.8805
Epoch 45/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6453 - g_loss: 0.9060
Epoch 46/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6403 - g_loss: 0.8778
Epoch 47/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6413 - g_loss: 0.8652
Epoch 48/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6461 - g_loss: 0.8868
Epoch 49/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6500 - g_loss: 0.8656
Epoch 50/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6554 - g_loss: 0.8750
Epoch 51/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6631 - g_loss: 0.8669
Epoch 52/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6580 - g_loss: 0.8426
Epoch 53/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6598 - g_loss: 0.8414
Epoch 54/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6687 - g_loss: 0.8598
Epoch 55/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6691 - g_loss: 0.8495
Epoch 56/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6667 - g_loss: 0.8264
Epoch 57/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6661 - g_loss: 0.8221
Epoch 58/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6679 - g_loss: 0.8268
Epoch 59/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6711 - g_loss: 0.8332
Epoch 60/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6702 - g_loss: 0.8316
Epoch 61/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6694 - g_loss: 0.8381
Epoch 62/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6704 - g_loss: 0.8383
Epoch 63/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6679 - g_loss: 0.8209
Epoch 64/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6712 - g_loss: 0.8281
Epoch 65/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6705 - g_loss: 0.8216
Epoch 66/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6700 - g_loss: 0.8459
Epoch 67/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6672 - g_loss: 0.8175
Epoch 68/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6722 - g_loss: 0.8399
Epoch 69/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6682 - g_loss: 0.8249
Epoch 70/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6730 - g_loss: 0.8474
Epoch 71/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6669 - g_loss: 0.8343
Epoch 72/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6660 - g_loss: 0.8473
Epoch 73/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6622 - g_loss: 0.8317
Epoch 74/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6652 - g_loss: 0.8292
Epoch 75/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6700 - g_loss: 0.8562
Epoch 76/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6651 - g_loss: 0.8216
Epoch 77/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6640 - g_loss: 0.8236
Epoch 78/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6637 - g_loss: 0.8327
Epoch 79/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6610 - g_loss: 0.8382
Epoch 80/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6651 - g_loss: 0.8248
Epoch 81/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6614 - g_loss: 0.8331
Epoch 82/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6617 - g_loss: 0.8414
Epoch 83/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6629 - g_loss: 0.8333
Epoch 84/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6595 - g_loss: 0.8417
Epoch 85/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6625 - g_loss: 0.8280
Epoch 86/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6633 - g_loss: 0.8375
Epoch 87/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6611 - g_loss: 0.8591
Epoch 88/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6561 - g_loss: 0.8369
Epoch 89/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6616 - g_loss: 0.8417
Epoch 90/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6637 - g_loss: 0.8508
Epoch 91/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6595 - g_loss: 0.8517
Epoch 92/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6608 - g_loss: 0.8428
Epoch 93/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6594 - g_loss: 0.8336
Epoch 94/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6590 - g_loss: 0.8315
Epoch 95/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6583 - g_loss: 0.8333
Epoch 96/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6594 - g_loss: 0.8303
Epoch 97/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6546 - g_loss: 0.8348
Epoch 98/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6584 - g_loss: 0.8405
Epoch 99/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6624 - g_loss: 0.8645
Epoch 100/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6599 - g_loss: 0.8498
Epoch 101/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6575 - g_loss: 0.8332
Epoch 102/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6595 - g_loss: 0.8275
Epoch 103/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6601 - g_loss: 0.8315
Epoch 104/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6621 - g_loss: 0.8393
Epoch 105/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6613 - g_loss: 0.8388
Epoch 106/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6625 - g_loss: 0.8495
Epoch 107/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6603 - g_loss: 0.8350
Epoch 108/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6644 - g_loss: 0.8724
Epoch 109/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6593 - g_loss: 0.8471
Epoch 110/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6629 - g_loss: 0.8525
Epoch 111/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6621 - g_loss: 0.8371
Epoch 112/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6604 - g_loss: 0.8458
Epoch 113/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6566 - g_loss: 0.8189
Epoch 114/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6601 - g_loss: 0.8423
Epoch 115/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6586 - g_loss: 0.8311
Epoch 116/200
256/256 [==============================] - 77s 300ms/step - d_loss: 0.6577 - g_loss: 0.8303
Epoch 117/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6595 - g_loss: 0.8392
Epoch 118/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6614 - g_loss: 0.8309
Epoch 119/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6645 - g_loss: 0.8471
Epoch 120/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6593 - g_loss: 0.8254
Epoch 121/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6599 - g_loss: 0.8239
Epoch 122/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6663 - g_loss: 0.8523
Epoch 123/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6589 - g_loss: 0.8249
Epoch 124/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6621 - g_loss: 0.8357
Epoch 125/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6637 - g_loss: 0.8553
Epoch 126/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6597 - g_loss: 0.8227
Epoch 127/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6594 - g_loss: 0.8216
Epoch 128/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6622 - g_loss: 0.8458
Epoch 129/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6611 - g_loss: 0.8316
Epoch 130/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6595 - g_loss: 0.8277
Epoch 131/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6591 - g_loss: 0.8186
Epoch 132/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6608 - g_loss: 0.8606
Epoch 133/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6612 - g_loss: 0.8356
Epoch 134/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6608 - g_loss: 0.8295
Epoch 135/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6598 - g_loss: 0.8320
Epoch 136/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6613 - g_loss: 0.8466
Epoch 137/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6592 - g_loss: 0.8302
Epoch 138/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6595 - g_loss: 0.8227
Epoch 139/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6578 - g_loss: 0.8247
Epoch 140/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6602 - g_loss: 0.8335
Epoch 141/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6583 - g_loss: 0.8399
Epoch 142/200
256/256 [==============================] - 82s 320ms/step - d_loss: 0.6583 - g_loss: 0.8295
Epoch 143/200
256/256 [==============================] - 83s 324ms/step - d_loss: 0.6584 - g_loss: 0.8292
Epoch 144/200
256/256 [==============================] - 83s 325ms/step - d_loss: 0.6582 - g_loss: 0.8373
Epoch 145/200
256/256 [==============================] - 84s 327ms/step - d_loss: 0.6581 - g_loss: 0.8312
Epoch 146/200
256/256 [==============================] - 83s 325ms/step - d_loss: 0.6629 - g_loss: 0.8626
Epoch 147/200
256/256 [==============================] - 84s 326ms/step - d_loss: 0.6599 - g_loss: 0.8332
Epoch 148/200
256/256 [==============================] - 83s 326ms/step - d_loss: 0.6570 - g_loss: 0.8281
Epoch 149/200
256/256 [==============================] - 83s 324ms/step - d_loss: 0.6609 - g_loss: 0.8513
Epoch 150/200
256/256 [==============================] - 84s 326ms/step - d_loss: 0.6552 - g_loss: 0.8212
Epoch 151/200
256/256 [==============================] - 83s 325ms/step - d_loss: 0.6567 - g_loss: 0.8353
Epoch 152/200
256/256 [==============================] - 84s 326ms/step - d_loss: 0.6577 - g_loss: 0.8736
Epoch 153/200
256/256 [==============================] - 81s 316ms/step - d_loss: 0.6540 - g_loss: 0.8268
Epoch 154/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6537 - g_loss: 0.8224
Epoch 155/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6584 - g_loss: 0.8702
Epoch 156/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6543 - g_loss: 0.8213
Epoch 157/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6557 - g_loss: 0.8324
Epoch 158/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6582 - g_loss: 0.8548
Epoch 159/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6563 - g_loss: 0.8364
Epoch 160/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6560 - g_loss: 0.8439
Epoch 161/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6544 - g_loss: 0.8352
Epoch 162/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6543 - g_loss: 0.8369
Epoch 163/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6570 - g_loss: 0.8534
Epoch 164/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6517 - g_loss: 0.8356
Epoch 165/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6553 - g_loss: 0.8410
Epoch 166/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6613 - g_loss: 0.8482
Epoch 167/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6536 - g_loss: 0.8325
Epoch 168/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6542 - g_loss: 0.8381
Epoch 169/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6510 - g_loss: 0.8512
Epoch 170/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6519 - g_loss: 0.8435
Epoch 171/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6528 - g_loss: 0.8294
Epoch 172/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6514 - g_loss: 0.8358
Epoch 173/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6551 - g_loss: 0.8475
Epoch 174/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6505 - g_loss: 0.8349
Epoch 175/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6527 - g_loss: 0.8578
Epoch 176/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6516 - g_loss: 0.8475
Epoch 177/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6524 - g_loss: 0.8425
Epoch 178/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6506 - g_loss: 0.8321
Epoch 179/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6532 - g_loss: 0.8427
Epoch 180/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6519 - g_loss: 0.8361
Epoch 181/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6522 - g_loss: 0.8888
Epoch 182/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6506 - g_loss: 0.8510
Epoch 183/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6504 - g_loss: 0.8393
Epoch 184/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6521 - g_loss: 0.8489
Epoch 185/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6469 - g_loss: 0.8348
Epoch 186/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6494 - g_loss: 0.8418
Epoch 187/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6491 - g_loss: 0.8423
Epoch 188/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6497 - g_loss: 0.8499
Epoch 189/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6470 - g_loss: 0.8306
Epoch 190/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6499 - g_loss: 0.8564
Epoch 191/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6472 - g_loss: 0.8417
Epoch 192/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6483 - g_loss: 0.8548
Epoch 193/200
256/256 [==============================] - 79s 307ms/step - d_loss: 0.6499 - g_loss: 0.8500
Epoch 194/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6493 - g_loss: 0.8539
Epoch 195/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6514 - g_loss: 0.8569
Epoch 196/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6476 - g_loss: 0.8386
Epoch 197/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6498 - g_loss: 0.8748
Epoch 198/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6475 - g_loss: 0.8682
Epoch 199/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6482 - g_loss: 0.8561
Epoch 200/200
256/256 [==============================] - 79s 308ms/step - d_loss: 0.6445 - g_loss: 0.8450
In [25]:
plot_loss()
show_imgs(model)
save()
Lowest Generator Loss:  112
Highest Discriminator Loss:  60

Results Experiment 5

After training model 3 for a further 200 epochs, it can be seen that the lowest generator loss occurred at epoch 113 (after adjusting for zero-based indexing) with a value of 0.8189, still higher than in experiments 1 and 2. The outputs generated by experiment 5 provide an interesting look at the features being identified by the lower layers of the model, through the generation of some rather trippy-looking, alien-like flowers.
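The "index change" mentioned above comes from the fact that the best-epoch summary is reported as a zero-based list index, while Keras reports epochs one-based. A minimal sketch of the conversion, using illustrative loss values rather than the real training history:

```python
# Hypothetical generator-loss history (illustrative values, not the real run).
g_losses = [0.91, 0.86, 0.8189, 0.83, 0.85]

# argmin over the history gives a zero-based index;
# Keras epoch numbering starts at 1, hence the off-by-one "index change".
best_idx = min(range(len(g_losses)), key=g_losses.__getitem__)
best_epoch = best_idx + 1

print(best_idx, best_epoch, g_losses[best_idx])
```

With a real 200-entry history, an index of 112 maps to epoch 113 in exactly this way.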

Conclusion

Throughout the course of this investigation, the objective of producing realistic flower art using GANs has been explored through the tuning of various hyperparameters, such as layer units and learning rate, as well as the application of regularization techniques. The impact of these changes on the GAN's performance was evaluated by analyzing the loss values of both the generator and discriminator and by examining the final generated images.
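The tuning process described above can also be organized programmatically. The sketch below enumerates a small grid of candidate configurations; the optimizer names and values are hypothetical placeholders, not the exact settings used in these experiments:

```python
from itertools import product

# Hypothetical search space -- the actual experiments adjusted these by hand.
optimizers = ["adam", "rmsprop"]
learning_rates = [1e-4, 2e-4]
layer_units = [64, 128]

# Enumerate every combination as a candidate configuration to train and compare.
configs = [
    {"optimizer": opt, "learning_rate": lr, "units": u}
    for opt, lr, u in product(optimizers, learning_rates, layer_units)
]

print(len(configs))  # 8 candidate configurations
```

Each configuration would then be trained for a fixed number of epochs and ranked by its final losses and the visual quality of its samples.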

It can be concluded that a comprehensive design process for GAN models has been successfully demonstrated, as evidenced by the quality of the generated flower images and the insights gained from the experiments conducted. However, further experimentation is necessary to optimize the model's performance. Specifically, it is recommended to start from smaller models and fine-tune their hyperparameters to reduce the overall loss, scaling up the model's capacity only once a promising configuration has been found.